Dissertation Details
Mechanisms of hippocampal relational memory binding
Watson, Patrick
Keywords: Hippocampus; Relational Memory; Medial temporal lobe; computational modeling; Binding; neural oscillations; memory consolidation
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/44372/Patrick_Watson.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
PDF
【 Abstract 】

This work is about the mental representations that underlie memory for the complex compositions of people, places, things, and events that comprise everyday mnemonic experience, the mechanisms that bind, encode, and reconstruct these representations, and the mathematical frameworks that describe these mechanisms. It approaches this topic with a combination of computational modeling, human neuropsychological empirical research, and scholarly theory building. The critical components are 1) a model and discussion of memory consolidation, 2) reconstructive memory experiments, both in patients with damage to the hippocampus and in college-aged participants, and 3) a pair of computational models of relational memory binding, encoding, and reconstruction. These experiments all touch on a larger debate about memory representations that dates back at least to Bartlett (1932) and bears on questions such as: "What different types of memories are there?" "Are these memories more akin to reconstructions or recordings?" "How do time and experience change these representations?" and "What are the mechanisms and brain structures responsible for encoding, updating, and reconstructing these representations?"

Part 1: Memory Consolidation

The first critical component gives an overview of memory consolidation research and argues that the literature has two divergent definitions of memory consolidation: a narrow definition focused solely on how memories are protected from amnestic insult, and a broader definition that considers all kinds of representational change over time as examples of memory consolidation. Two consolidation models, one aimed at each of these definitions, are presented. The first model explains the narrow definition of memory consolidation for memories of any type (episodic, semantic, procedural, etc.) and of any scale (molecular, synaptic, or systems-level) by demonstrating that in any system that has 1) multiple loci for storing information and 2) mechanisms that transfer or copy information between these loci, information will tend to move from local to global representations. This means that any amnestic disruption, whether it attacks molecular pathways (e.g., neurotransmitters, protein synthesis), individual neurons or synapses, entire brain structures, or even networks of structures, will become less effective with time as the information it seeks to disrupt becomes more globally represented. Since the notion of multiple interacting memory systems in the brain is well established, and the model can be tuned with very few parameters to fit any arbitrary retrograde amnesia gradient, there is no need to explain memory consolidation-as-protection-from-disruption with reference to any specific brain process; it is an obligatory result of simply possessing a brain with multiple, interacting components.
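As a rough illustration of this first model's logic (a toy sketch, not taken from the dissertation; the number of stores, copy probability, and lesion scheme are assumed for illustration), the following simulation shows how traces that are occasionally copied between stores become progressively harder to erase by removing any single store, producing a retrograde-amnesia-like gradient:

```python
import random

# Toy multi-store consolidation model (illustrative parameters, not the
# dissertation's): each memory trace starts in one "local" store and is
# occasionally copied to other stores over time.  Lesioning any single store
# then spares old memories more than new ones -- a retrograde amnesia gradient.
N_STORES, COPY_P, N_MEMORIES = 5, 0.02, 500
random.seed(0)

def recall_after_lesion(age, lesioned_store=0):
    """Fraction of memories of a given age that survive removal of one store."""
    survived = 0
    for _ in range(N_MEMORIES):
        stores = {0}                              # trace begins in store 0
        for _ in range(age):                      # each step may copy it elsewhere
            if random.random() < COPY_P:
                stores.add(random.randrange(N_STORES))
        stores.discard(lesioned_store)            # the amnestic insult
        survived += bool(stores)                  # any surviving copy suffices
    return survived / N_MEMORIES

for age in range(0, 101, 25):
    print(f"memory age {age:3d}: recall after lesion = {recall_after_lesion(age):.2f}")
```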
The second, broader consolidation definition, however, requires a broader explanation that varies depending upon the type of memory representation in question. The second model presents an example of memory change over time based on previous explorations of memory consolidation and interference in the hippocampus and neocortex (McCloskey & Cohen, 1989; McClelland, McNaughton, & O'Reilly, 1995; Ans et al., 2002). Unlike some of these previous works, it concludes that because the neocortical system is only learning the statistical structure of its inputs, hippocampal interleaved learning can only prevent interference in the neocortex to the degree that it creates artificial structure via temporal ordering and repetitive exposure. In a modeling context it is of course always possible to tune the degree of ordering and repetition to match the observed result, but this is not very informative about the real mechanism involved. The hippocampus cannot know a priori what the neocortex "ought" to remember, before the neocortex attempts to recall the salient fact. The chapter therefore concludes that the hippocampal memory mechanism must be more elegant than a simple copy theory that exposes the neocortex to repeated instances of previous experience, because such a system would still require a homunculus that decides precisely how often previous experiences should be repeated.
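The interference problem this second model engages can be illustrated with a standard toy demonstration (a sketch in the spirit of McCloskey & Cohen, 1989, not the dissertation's simulation; the network size, learning rate, and random association sets are assumptions):

```python
import numpy as np

# Toy demonstration of catastrophic interference: training a simple linear
# associator on task B after task A raises task A error, while interleaving
# A and B keeps it low.  All parameters here are illustrative assumptions.
rng = np.random.default_rng(0)
D = 20                                                         # pattern dimensionality
A_x, A_y = rng.normal(size=(5, D)), rng.normal(size=(5, D))    # task A input->target pairs
B_x, B_y = rng.normal(size=(5, D)), rng.normal(size=(5, D))    # task B input->target pairs

def train(W, x, y, epochs=1000, lr=0.02):
    """Delta-rule training of a linear associator y ~ x @ W."""
    for _ in range(epochs):
        W += lr * x.T @ (y - x @ W) / len(x)
    return W

def error(W, x, y):
    return float(np.mean((y - x @ W) ** 2))

W_seq = train(train(np.zeros((D, D)), A_x, A_y), B_x, B_y)     # A first, then B only
W_mix = train(np.zeros((D, D)), np.vstack([A_x, B_x]), np.vstack([A_y, B_y]))
print("task A error, sequential training: ", round(error(W_seq, A_x, A_y), 3))
print("task A error, interleaved training:", round(error(W_mix, A_x, A_y), 3))
```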
Part 2: Reconstruction Experiments

The second critical component comprises two empirical, cognitive psychology studies of hippocampal relational memory binding using novel reconstruction paradigms. Memory for complex compositions of items and relations is most often measured using manipulated images or configurations of items (e.g., Ryan et al., 2000; Hannula, Tranel, & Cohen, 2006). This allows the item information to remain constant while the specific composition of items is manipulated, and participants typically demonstrate memory in such experiments by detecting the manipulations. Yet while these experiments are effective at detecting disruptions to relational representations, they do not reveal what change to the underlying representation causes it to diverge from the originally studied configuration. Allowing participants to report what their mental representation looks like requires a reconstructive memory paradigm (cf. Bartlett, 1932), the results of which are often difficult to quantify. The second component presents two studies that attempt to find a middle ground between open-ended reconstruction and controlled quantifiability, in hopes of developing richer relational memory datasets.

The first of these experiments involved a simple spatial reconstruction paradigm. Patients with hippocampal damage at the University of Iowa and age- and education-matched controls studied an array of 2-5 everyday objects placed at random locations on a 1 x 1 m table and then tried to reposition the objects in their studied locations after a brief (4 s) eyes-closed delay. Previous experiments measured performance in this task exclusively with an item misplacement measure (how many centimeters away from their studied locations items were placed at reconstruction; cf. Smith & Milner, 1981); however, we found that swapping the locations of pairs of items was far more indicative of hippocampal damage, with patients making numerous errors of this type while controls made only a single such error in the entire course of testing. Patients made swap errors even on two-object arrays, and the prevalence of swapping could not be explained simply by poor performance on other metrics, though it contributed heavily to poor performance on all of the other metrics we examined. These findings suggested that the primary deficit in patients with hippocampal damage in a spatial reconstruction task was an inability to correctly bind individual item identities to their relative locations, and not a more general failure of spatial or declarative memory. The fact that these deficits are observable even at short lags traditionally associated with working memory, and even with item sets as small as two, additionally argues that hippocampal damage is not simply a disruption of transfer from working to long-term memory. Finally, while the rate of swapping increased as the number of items increased, it did not increase faster than the number of pairwise relations present in the stimuli, suggesting that this error is directly tied to the relations between elements.
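One plausible way to score such reconstructions (a sketch under assumed conventions; the dissertation's exact scoring rules may differ) is to compute per-item misplacement and flag a swap whenever two items are each placed nearest the other's studied location:

```python
import math

# Score a spatial reconstruction: misplacement (cm) plus swap errors, where a
# "swap" means two items were each placed nearest the other's studied location.
# The assignment rule and the example coordinates are illustrative assumptions.
def nearest(pos, studied):
    return min(studied, key=lambda item: math.dist(pos, studied[item]))

def score(studied, placed):
    misplacement = {i: math.dist(placed[i], studied[i]) for i in studied}
    assigned = {i: nearest(placed[i], studied) for i in studied}   # item -> nearest studied slot
    swaps = {tuple(sorted((a, b))) for a in studied for b in studied
             if a != b and assigned[a] == b and assigned[b] == a}
    return misplacement, swaps

# Example: a two-object array (positions in cm) in which the items were exchanged.
studied = {"cup": (20.0, 30.0), "key": (70.0, 60.0)}
placed  = {"cup": (68.0, 58.0), "key": (22.0, 29.0)}
print(score(studied, placed))
```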
The second experiment of the second component was designed to explore these swap errors more thoroughly. Building on the first spatial reconstruction experiment, this second experiment required college-aged participants to reconstruct a short movie composed of a set of six face-background pairings, placing each face and background in its studied location and in the correct temporal sequence. Unlike the previous experiment, where participants could position objects at any location within a 1 m square and thereby produce different kinds of spatial errors, this "event reconstruction" paradigm had a finite and clearly delineated number of slots for each element to be bound to, allowing only for swap errors and making possible a robust similarity analysis to determine how many adjustments would be required to convert the participants' reconstructed configuration into the originally studied configuration. In examining participants' reconstruction accuracy (that is, how many elements of their reconstructed configuration were correctly bound), our central finding was that performance was tightly linked to relational complexity (i.e., bindings between sets with large numbers of elements were more difficult than those between sets with small numbers of elements) and arbitrariness (i.e., patterns of binding that were consistent across trials produced better performance than inconsistent patterns), and that both of these effects were closely related to the degree to which participants' performance could be predicted from their prior patterns of reconstructions (i.e., reconstruction "semantics"). However, once these two factors were controlled for, we did not find a strong effect of the type of binding (e.g., spatial-spatial vs. item-item). Additionally, using similarity analysis we were able to demonstrate that while participants' reconstructions were dramatically better than random performance, they were only slightly more similar to the studied configuration than they were to reconstructions created before the participant saw the studied configuration. We were also able to demonstrate that the general "semantic" tendencies of a participant enabled them to encode approximately 12 bindings more than would be expected by chance, while the "episodic" information encoded on each trial amounted to approximately 3 additional bindings. We further demonstrated that this additional information was not simply present in the accuracy of the initial configuration, but that the "errors" participants made were non-random and contained informational structure similar to that of the originally studied configuration. This study highlighted the dynamic and synergistic way in which new and previously learned information interact to provide a useful set of constraints capable of (re)creating a complex configuration of items, locations, and times, and it reaffirmed the importance of examining not just how memory drives correct performance, but also how memory contributes to non-random errors.
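The similarity analysis described above can be grounded in a standard permutation measure (a sketch, not necessarily the exact analysis used in the dissertation): the minimum number of pairwise exchanges needed to turn a reconstructed assignment into the studied one equals the number of misplaced elements minus the number of permutation cycles among them.

```python
# Minimum pairwise swaps needed to transform one assignment of elements to slots
# into another, computed by tracing permutation cycles.  Treating reconstructions
# as permutations of a fixed slot set is an illustrative assumption.
def min_swaps_to_match(studied_order, reconstructed_order):
    index_in_studied = {elem: i for i, elem in enumerate(studied_order)}
    perm = [index_in_studied[elem] for elem in reconstructed_order]  # slot -> target slot
    seen, swaps = [False] * len(perm), 0
    for start in range(len(perm)):
        if seen[start] or perm[start] == start:
            continue
        length, j = 0, start
        while not seen[j]:                 # trace one permutation cycle
            seen[j] = True
            j = perm[j]
            length += 1
        swaps += length - 1                # a cycle of length L needs L - 1 swaps
    return swaps

# Example: six face slots in which two pairs were exchanged -> 2 swaps away.
studied = ["A", "B", "C", "D", "E", "F"]
recon   = ["B", "A", "C", "F", "E", "D"]
print(min_swaps_to_match(studied, recon))   # -> 2
```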
Part 3: Computational Models of Hippocampal Function

The first of two computational models presented is the Memory and Reasoning (M&R) model, produced in collaboration with investigators at Sandia National Laboratories. This system was meant to simulate the processing of visual sensory streams and the hippocampus. It was composed of two cortical components, both of which were composed of hierarchically arranged adaptive resonance theory (ART) networks that used an unsupervised learning algorithm to capture the statistical structure of complex visual inputs. One of these components acted on high-resolution pictures of faces (meant to be analogous to objects in the fovea), while the other acted on low-resolution pictures of scenes (meant to be analogous to the lower-resolution visual surround). Given a face-scene pairing, the first component simulated the processing of the ventral stream, parsing the complex images into simple visual features, then higher-order structural features, and finally into objects corresponding to the individual faces. The second component simulated the processing of the dorsal visual stream, parsing its inputs into spatial features (e.g., "objects of any kind on the left"). Both of these streams were equipped with "recall" capabilities: if a single unit corresponding to a particular input or input category was activated, the recurrent connections within the ART network that represented that category would produce the "prototypical" input that would elicit the activation of that category. This input would in turn activate ART networks lower down in the hierarchy in the same fashion, until the network printed out at the sensory camera (its "mind's eye") a visual configuration that corresponded to the originally activated component's input category. In this way the system could serve as a "pictures in/pictures out" searchable database for visual images. Augmenting this function was a hippocampal component that bound together information from both the "dorsal" and "ventral" components. It did this by passing the high-level activation of both components into a high-dimensional space (meant to simulate the dentate gyrus) to obtain a unique "pattern-separated" key corresponding to the particular input conjunction. It then washed this key through a heavily recurrent local component (meant to simulate CA3) to fill in any gaps, and then mapped this output back to the two cortical components via a third hippocampal component (meant to simulate CA1). In this way faces could be "shown" to the model to elicit the "scene" they were studied with, and vice versa, allowing the model to complete source-memory-style tasks. By tuning the time at which CA1 performed its reactivation of the cortical components, it was also possible to recover additional face-scene pairings that were studied close together in time to the original cue, and even entire sequences of studied face-scene pairings. This model was therefore able to recreate much of the performance, and even subjective experience, of relational memory tasks, but it lacked much of the flexibility of relational memory. It could not create face-face bindings, perform transitive inference, or create novel bindings. In essence, the hippocampal component was performing the same type of category learning as the two cortical components, but it was learning highly specific, cross-domain categories.
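The binding scheme just described can be caricatured in a few lines (an illustrative sketch, not the M&R implementation; the expansion size, sparsity, and Hebbian storage rule are assumptions): a sparse, high-dimensional random expansion pattern-separates similar input conjunctions, and a simple autoassociative store completes a degraded key.

```python
import numpy as np

# Sketch of the binding scheme above (illustrative parameters, not the M&R
# implementation): a sparse high-dimensional expansion (a stand-in for the
# dentate gyrus) pattern-separates similar input conjunctions, and a Hebbian
# autoassociator (a stand-in for CA3) completes a degraded key.
rng = np.random.default_rng(1)
N_IN, N_DG, SPARSITY = 64, 1024, 0.05
proj = rng.normal(size=(N_IN, N_DG))                 # fixed random expansion weights

def dg_key(conjunction):
    """Keep only the most strongly driven DG units (k-winners-take-all)."""
    drive = conjunction @ proj
    key = np.zeros(N_DG)
    key[np.argsort(drive)[-int(SPARSITY * N_DG):]] = 1.0
    return key

def overlap(a, b):
    return float(a @ b) / float(a @ a)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two overlapping "face-scene" conjunctions become less similar after expansion.
x1 = rng.normal(size=N_IN)
x2 = x1 + 0.5 * rng.normal(size=N_IN)
print("input similarity:", round(cosine(x1, x2), 2))
print("DG key overlap:  ", round(overlap(dg_key(x1), dg_key(x2)), 2))

# CA3-like completion: store one key by Hebbian outer product, then recall it
# from a partial cue in which a random 30% of units have been silenced.
k1 = dg_key(x1)
W = np.outer(2 * k1 - 1, 2 * k1 - 1)
partial = k1.copy()
partial[rng.random(N_DG) < 0.3] = 0.0
completed = ((2 * partial - 1) @ W > 0).astype(float)
print("completion overlap with stored key:", round(overlap(completed, k1), 2))
```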
The limits of the M&R model motivated the final model of the document: the relational memory binding, encoding, and reactivation (RMBER) model. Produced in collaboration with the FRAMES team of the IARPA ICArUS project, this model was meant to perform flexible relational binding of complex compositions of stimuli. Structurally, it closely resembled the M&R model, with a cortically inspired input/output region (entorhinal cortex, EC, layers 2 and 5 respectively), a dentate gyrus (DG) with a large number of units relative to its inputs, a highly recurrent CA3 region, and a CA1 region that performed mappings between CA3 and the EC. However, unlike the previous model, the RMBER model used Mihalas-Niebur spiking neurons (Mihalas & Niebur, 2009) to model the actual oscillatory dynamics of neurons. These neurons used spike-timing-dependent plasticity that tuned not only the strength of neural connections but also the degree to which neurons were coupled with inhibitory interneurons. In this way, input could modulate both the rate at which a unit fires and the delay with which it fires in response to the input. In addition, all units in the model were subject to an extrinsically generated theta rhythm and a locally generated gamma rhythm. This allowed the model to do more than simply generate conjunctions corresponding to previous inputs.

First, the model treats the dynamics of the entire entorhinal cortex as a superposition of the activity of the entorhinal cortical units. It maps these dynamics into the high-dimensional space of the dentate gyrus, where the complex dynamics of the entorhinal cortex are decomposed across a large number of cells, resulting in dentate cells that respond only to particular sub-frequencies of the entorhinal dynamics, and only when presented at certain phase delays (relative to the beginning of a theta or gamma cycle). These dentate dynamics are collapsed back into the CA3 region, which, by summing across large numbers of dentate cells' activity and by reconstructing their signals within its highly recurrent local network, recreates the superposition of single-unit dynamics present in EC2. These dynamics are mapped back to EC5 via CA1. Processing within each sub-region requires one gamma cycle, ensuring that the output from CA1 arrives back at EC5 at the beginning of the next theta cycle, thus providing the hippocampal network's predicted dynamics for the next theta cycle. This process shares much in common with the discrete Fourier transform (sketched below): it decomposes a complex signal into its phase and frequency subcomponents, discards the higher-order frequencies, and recomposes the original dynamics from a compressed code relying upon the stored coefficients. However, while the hippocampal model initially fills in gaps in the signal with Gaussian random noise, over time it learns to fill in gaps with previously stored phase and frequency sub-components. Since these sub-components are derived from the observed activity in the EC, they reflect real relationships present in the activity of EC neurons. Since the model stores both the phase and the frequency of previous dynamics, it creates a truly compositional, concatenative code that reproduces the appropriate level of activity at the appropriate EC units in the same order as previously observed. Thus, unlike the rigid configural categories of the previous M&R model, the RMBER model can flexibly add and remove sub-components from the entorhinal dynamics while maintaining the correct relative order of activations. We show that the model is capable of learning simple rules and complex patterns of geometric relations such as path integration. We generate a "virtual rat," have it run along a random path in a virtual circular enclosure, and then allow the hippocampal model to reconstruct this path from a partial cue. We also demonstrate that the model develops many of the same cell types observed in single-cell recording studies (e.g., "place" and "time" cells). This model provides a novel way of understanding the hippocampus's relational binding function by relating its intrinsic dynamics to the discrete Fourier transform.

Together, these papers outline the need for a specialized memory system devoted to binding compositions of independent elements, experimental evidence for the existence of such a system, and computational mechanisms by which such a system might act.
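To make the Fourier-transform analogy concrete (a toy sketch; the signal, gap handling, and number of stored coefficients are assumptions rather than the RMBER model's actual dynamics), the following stores a compressed phase/frequency code of a theta/gamma-like signal and uses it to fill a gap that is initially patched with Gaussian noise:

```python
import numpy as np

# DFT analogy: store a compressed phase/frequency code of a theta/gamma-like
# signal, then use it to fill a gap in a later, degraded cue.  Frequencies,
# gap location, and the number of stored coefficients are illustrative.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 512, endpoint=False)
signal = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 40 * t + 1.0)  # "theta" + "gamma"

# "Encoding": keep only the K strongest frequency components (phase + amplitude).
K = 4
spectrum = np.fft.rfft(signal)
stored = np.zeros_like(spectrum)
stored[np.argsort(np.abs(spectrum))[-K:]] = spectrum[np.argsort(np.abs(spectrum))[-K:]]

# "Recall": a noisy cue with a gap, first patched with Gaussian noise...
cue = signal + 0.2 * rng.normal(size=t.size)
cue[200:300] = rng.normal(size=100)
# ...then patched instead with the dynamics recomposed from the stored code.
recalled = np.fft.irfft(stored, n=t.size)
filled = cue.copy()
filled[200:300] = recalled[200:300]

err_noise  = float(np.mean((cue[200:300] - signal[200:300]) ** 2))
err_stored = float(np.mean((filled[200:300] - signal[200:300]) ** 2))
print(f"gap error with noise fill:  {err_noise:.3f}")
print(f"gap error with stored code: {err_stored:.3f}")
```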

【 Preview 】
Attachment List
Files                                                 Size     Format  View
Mechanisms of hippocampal relational memory binding   5619 KB  PDF     download
Document Metrics
Downloads: 10    Views: 23