Recurrent networks have been used as neural models of language processing, with mixed results. Here, we discuss the role of recurrent networks in a neural architecture of grounded cognition. In particular, we discuss how the control of binding in this architecture can be learned. We trained a simple recurrent network (SRN) and a feedforward network (FFN) for this task. The results show that information from the architecture is needed as input for these networks to learn control of binding. Thus, both control systems are recurrent. We found that the recurrent system consisting of the architecture and an SRN or an FFN as a ‘core’ can learn basic (but recursive) sentence structures. Problems with control of binding arise when the system with the SRN is tested on a number of new sentence structures. In contrast, control of binding for these structures succeeds with the FFN. Yet, for some structures with (unlimited) embeddings, difficulties arise due to dynamical binding conflicts in the architecture itself. In closing, we discuss potential future developments of the architecture presented here.
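To illustrate the distinction the abstract draws, the sketch below contrasts a minimal Elman-style SRN, whose recurrence is internal (the previous hidden state is fed back as a context layer), with an FFN that only becomes part of a recurrent system when feedback from the surrounding architecture is supplied as extra input on each step. This is a toy illustration under assumed dimensions and class names (SimpleRecurrentNetwork, FeedforwardNetwork, the 'control' feedback vector); it is not the authors' implementation or training setup.

```python
import numpy as np

class SimpleRecurrentNetwork:
    """Elman-style SRN: the hidden state from the previous time step
    is fed back as additional input (the 'context' layer)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # weights from (input + context) to hidden, and hidden to output
        self.W_xh = rng.normal(0.0, 0.1, (n_hidden, n_in + n_hidden))
        self.W_hy = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)  # context = previous hidden state

    def step(self, x):
        z = np.concatenate([x, self.h])        # current input + context
        self.h = np.tanh(self.W_xh @ z)        # new hidden state
        return 1.0 / (1.0 + np.exp(-self.W_hy @ self.h))  # sigmoid output


class FeedforwardNetwork:
    """FFN with no context layer of its own; recurrence arises only
    through the external loop that feeds architecture output back in."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_xh = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_hy = rng.normal(0.0, 0.1, (n_out, n_hidden))

    def step(self, x):
        h = np.tanh(self.W_xh @ x)
        return 1.0 / (1.0 + np.exp(-self.W_hy @ h))


if __name__ == "__main__":
    # Toy word-by-word input: 5-dimensional one-hot 'words', 3 control outputs.
    srn = SimpleRecurrentNetwork(n_in=5, n_hidden=8, n_out=3)
    ffn = FeedforwardNetwork(n_in=5 + 3, n_hidden=8, n_out=3)

    control = np.zeros(3)  # hypothetical feedback signal from the architecture
    for t in range(4):
        word = np.eye(5)[t % 5]
        srn_out = srn.step(word)                              # recurrence inside the SRN
        control = ffn.step(np.concatenate([word, control]))   # recurrence via the external loop
        print(t, srn_out.round(2), control.round(2))
```

In both cases the overall control system is recurrent, which is the point the abstract makes: the FFN is only a non-recurrent 'core' embedded in a recurrent loop with the architecture.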