The integration of adaptive functions within analog neural hardware, while promising to enhance system performance, has long been hindered by technological difficulties arising from the complexity and sensitivity of standard adaptive algorithms. We present a general framework for self-contained adaptation in analog VLSI that supports a broad class of supervised learning and optimization tasks and largely alleviates these implementation problems through a robust system approach exploiting statistics and redundancy in stochastic processes. Specifically, the framework includes: i) a perturbative algorithm, based on stochastic approximation, that optimizes a set of parameters in an arbitrary deterministic system, adjusting the parameters according to global performance evaluations rather than explicit knowledge of the system's internal structure; and ii) a scalable, modular CMOS architecture that implements this algorithm and additionally provides embedded long-term dynamic storage of the volatile analog parameter values, quantized locally and refreshed autonomously on capacitors, with direct external access in both digital and analog formats. We analyze the convergence and scaling properties of the stochastic algorithm, present on-line versions of the algorithm for supervised learning in dynamical systems, and provide experimental results demonstrating real-time trajectory learning on an analog CMOS chip containing a network of six fully recurrent dynamical neurons. We also include results demonstrating robust long-term retention of locally stored volatile information in analog VLSI using the autonomous refresh technique.
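The model-free, perturbative optimization described in i) can be illustrated by a minimal software sketch. The code below is not the paper's implementation; it is a hedged illustration using simultaneous-perturbation stochastic approximation, one standard instance of the class of algorithms the abstract describes: all parameters receive a random ±1 perturbation, and the update is computed from two global performance (loss) evaluations only, with no access to the system's internal structure. The function names, step-size schedules, and the toy quadratic objective are illustrative assumptions.

```python
import numpy as np

def perturbative_step(theta, loss, a, c, rng):
    """One parameter update driven solely by global performance evaluations.

    theta : current parameter vector
    loss  : black-box performance measure of the system (scalar)
    a, c  : step size and perturbation amplitude for this iteration
    """
    # Simultaneous random +/-1 perturbation of all parameters
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two global evaluations of the perturbed system; for +/-1 perturbations
    # the per-parameter gradient estimate is the difference times delta
    g = (loss(theta + c * delta) - loss(theta - c * delta)) / (2.0 * c) * delta
    return theta - a * g

rng = np.random.default_rng(0)
# Toy objective standing in for the system's performance index (assumption)
loss = lambda th: float(np.sum((th - 1.0) ** 2))

theta = np.zeros(4)
for k in range(2000):
    a = 0.5 / (k + 20.0)          # decreasing step size (stochastic approximation)
    c = 0.1                        # fixed perturbation amplitude for simplicity
    theta = perturbative_step(theta, loss, a, c, rng)
# theta drifts toward the minimizer at 1.0 despite using only scalar evaluations
```

The decreasing step-size schedule is what gives stochastic approximation its convergence guarantees; in analog hardware this same structure is attractive because it needs only a global scalar error signal, tolerating device mismatch in the parameter update paths.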
Analog VLSI autonomous systems for learning and optimization