Selected filters: journal articles · Neuroscience · 2023 (2,082 matching records)

Frontiers in Computational Neuroscience, 2023

Rachel V. Quinarez, Joseph Schmalz, Mayuresh V. Kothare, Gautam Kumar

Epileptic seizure is typically characterized by highly synchronized episodes of neural activity. Existing stimulation therapies focus purely on suppressing the pathologically synchronized neuronal firing patterns during the ictal (seizure) period. While these strategies are effective in suppressing seizures when they occur, they fail to prevent the re-emergence of seizures once the stimulation is turned off. Previously, we developed a novel neurostimulation motif, termed “Forced Temporal Spike-Time Stimulation” (FTSTS), which has shown remarkable promise in the long-lasting desynchronization of excessively synchronized neuronal firing patterns by harnessing synaptic plasticity. In this paper, we build upon this prior work by optimizing the parameters of the FTSTS protocol in order to efficiently desynchronize the pathologically synchronous neuronal firing patterns that occur during epileptic seizures, using a recently published computational model of neocortical-onset seizures. We show that the FTSTS protocol applied during the ictal period can modify the excitatory-to-inhibitory synaptic weight in order to effectively desynchronize the pathological neuronal firing patterns even after the ictal period. Our investigation opens the door to a possible new neurostimulation therapy for epilepsy.
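The plasticity lever this abstract invokes can be sketched with a generic pair-based STDP rule: repeatedly forcing the same pre/post spike timing drives a synaptic weight in one direction. This is an illustrative toy, not the authors' FTSTS protocol or their seizure model; the rule, constants, and function names are all assumptions.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP: weight change for spike-time difference
    dt_ms = t_post - t_pre. Pre-before-post potentiates, post-before-pre
    depresses (all constants are illustrative)."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

def forced_pairing(w, dt_ms, n_pairs, w_min=0.0, w_max=1.0):
    """Impose the same pre/post timing over many pairings, as a forced
    spike-time protocol would, and accumulate the resulting weight drift."""
    for _ in range(n_pairs):
        w = min(w_max, max(w_min, w + stdp_dw(dt_ms)))
    return w

# Forcing pre-before-post pairings drags the synapse up; reversing the
# timing drags it down. Steering weights this way is the kind of lever an
# FTSTS-like protocol pulls on to break up synchronized firing.
w_up = forced_pairing(0.5, dt_ms=+5.0, n_pairs=100)
w_down = forced_pairing(0.5, dt_ms=-5.0, n_pairs=100)
```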

Frontiers in Computational Neuroscience, 2023

Bertrand Reulet, Emmanuel Calvet, Jean Rouat

Reservoir computing provides a time and cost-efficient alternative to traditional learning methods. Critical regimes, known as the “edge of chaos,” have been found to optimize computational performance in binary neural networks. However, little attention has been devoted to studying reservoir-to-reservoir variability when investigating the link between connectivity, dynamics, and performance. As physical reservoir computers become more prevalent, developing a systematic approach to network design is crucial. In this article, we examine Random Boolean Networks (RBNs) and demonstrate that specific distribution parameters can lead to diverse dynamics near critical points. We identify distinct dynamical attractors and quantify their statistics, revealing that most reservoirs possess a dominant attractor. We then evaluate performance in two challenging tasks, memorization and prediction, and find that a positive excitatory balance produces a critical point with higher memory performance. In comparison, a negative inhibitory balance delivers another critical point with better prediction performance. Interestingly, we show that the intrinsic attractor dynamics have little influence on performance in either case.
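A Random Boolean Network of the kind examined here, along with the attractor-finding step the abstract describes, can be sketched in a few lines. This is a generic RBN with uniform random truth tables, not the parameterized connectivity distributions the article studies; the sizes, seed, and function names are illustrative.

```python
import random

def make_rbn(n, k, seed=0):
    """Random Boolean Network: each of n nodes reads k random inputs
    through a random Boolean truth table (uniform bias p = 0.5, so
    k = 2 sits at the classical critical point 2*k*p*(1-p) = 1)."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        # Each node looks up its next value by packing its k input bits.
        return [tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                for i in range(n)]
    return step

def find_attractor(step, state, max_steps):
    """Iterate the deterministic dynamics until a state repeats and
    return the attractor cycle length (max_steps >= 2**n guarantees
    a repeat for an n-node network)."""
    seen = {}
    for t in range(max_steps):
        key = tuple(state)
        if key in seen:
            return t - seen[key]
        seen[key] = t
        state = step(state)
    return None

step = make_rbn(n=16, k=2, seed=1)
cycle_len = find_attractor(step, [0] * 16, max_steps=2**16 + 1)
```

Sampling many seeds and initial states, and tallying which attractor each run falls into, is the kind of reservoir-to-reservoir statistic the article quantifies.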

Frontiers in Computational Neuroscience, 2023

Ikhwan Jeon, Taegon Kim

Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on the understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization, or by implanting the outcome of the optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism of the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network, and to offer a map for navigating the gap between neuroscience and AI engineering.

Frontiers in Computational Neuroscience, 2023

Robrecht Raedt, Wout Joseph, Thomas Tarnaud, Emmeric Tanghe, Laila Weyn, Ruben Schoeters

Introduction: Optogenetics has emerged as a promising technique for modulating neuronal activity and holds potential for the treatment of neurological disorders such as temporal lobe epilepsy (TLE). However, clinical translation still faces many challenges. This in-silico study aims to enhance the understanding of optogenetic excitability in CA1 cells and to identify strategies for improving stimulation protocols.

Methods: Employing state-of-the-art computational models coupled with Monte Carlo-simulated light propagation, we investigate the optogenetic excitability of four CA1 cells (two pyramidal cells and two interneurons) expressing ChR2(H134R).

Results and discussion: The results demonstrate that confining the opsin to specific neuronal membrane compartments significantly improves excitability. An improvement is also achieved by focusing the light beam on the most excitable cell region. Moreover, orienting the optical fiber perpendicular to the somato-dendritic axis yields superior results. Inter-cell variability is observed, highlighting the importance of considering neuron degeneracy when designing optogenetic tools. Confining the opsin to the basal dendrites renders the pyramidal cells most excitable. A global sensitivity analysis identified opsin location and expression level as having the greatest impact on simulation outcomes. Coupling the neuron models with simulated light propagation is shown to reduce the error of the simulation outcome. The results promote spatial confinement and increased opsin expression levels as important improvement strategies. On the other hand, uncertainties in these parameters limit precise determination of the irradiance thresholds. This study provides valuable insights into the optogenetic excitability of CA1 cells, useful for the development of improved optogenetic stimulation protocols for applications such as TLE treatment.
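Why light propagation matters for irradiance thresholds can be illustrated with a crude analytic falloff model: exponential tissue attenuation combined with conical beam spreading from the fiber tip. This is a stand-in sketch, not the Monte Carlo propagation or the CA1 cell models used in the study; the function name and every coefficient here are invented for illustration.

```python
import math

def fiber_irradiance(power_mw, z_um, fiber_radius_um=100.0,
                     mu_eff=0.002, na_tan=0.37):
    """Rough irradiance (mW/mm^2) at depth z_um below a flat fiber tip:
    exponential attenuation (mu_eff in 1/um, illustrative value) combined
    with conical spreading (na_tan ~ tangent of the divergence half-angle)."""
    beam_radius_um = fiber_radius_um + z_um * na_tan
    area_mm2 = math.pi * (beam_radius_um / 1000.0) ** 2
    return (power_mw / area_mm2) * math.exp(-mu_eff * z_um)

# Irradiance falls off steeply with depth, which is why opsin placement
# and expression level dominate the sensitivity analysis: small spatial
# shifts change the light dose an opsin-bearing compartment receives.
profile = [fiber_irradiance(1.0, z) for z in (0.0, 100.0, 200.0, 400.0)]
```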

Frontiers in Computational Neuroscience, 2023

Omid Madani

How do humans learn the regularities of their complex noisy world in a robust manner? There is ample evidence that much of this learning and development occurs in an unsupervised fashion via interactions with the environment. Both the structure of the world as well as the brain appear hierarchical in a number of ways, and structured hierarchical representations offer potential benefits for efficient learning and organization of knowledge, such as concepts (patterns) sharing parts (subpatterns), and for providing a foundation for symbolic computation and language. A major question arises: what drives the processes behind acquiring such hierarchical spatiotemporal concepts? We posit that the goal of advancing one's predictions is a major driver for learning such hierarchies and introduce an information-theoretic score that shows promise in guiding the processes, and, in particular, motivating the learner to build larger concepts. We have been exploring the challenges of building an integrated learning and developing system within the framework of prediction games, wherein concepts serve as (1) predictors, (2) targets of prediction, and (3) building blocks for future higher-level concepts. Our current implementation works on raw text: it begins at a low level, such as characters, which are the hardwired or primitive concepts, and grows its vocabulary of networked hierarchical concepts over time. Concepts are strings or n-grams in our current realization, but we hope to relax this limitation, e.g., to a larger subclass of finite automata. After an overview of the current system, we focus on the score, named CORE. CORE is based on comparing the prediction performance of the system with a simple baseline system that is limited to predicting with the primitives. CORE incorporates a tradeoff between how strongly a concept is predicted (or how well it fits its context, i.e., nearby predicted concepts) vs. how well it matches the (ground) “reality,” i.e., the lowest-level observations (the characters in the input episode). CORE is applicable to generative models such as probabilistic finite state machines (beyond strings). We highlight a few properties of CORE with examples. The learning is scalable and open-ended. For instance, thousands of concepts are learned after hundreds of thousands of episodes. We give examples of what is learned, and we also empirically compare with transformer neural networks and n-gram language models to situate the current implementation with respect to the state of the art and to further illustrate the similarities and differences with existing techniques. We touch on a variety of challenges and promising future directions in advancing the approach, in particular, the challenge of learning concepts with a more sophisticated structure.
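The baseline comparison at the heart of CORE can be caricatured as a log-loss difference: how many bits per character a concept-level model saves over a primitives-only (character) baseline. This toy is not the authors' CORE definition, which additionally trades off fit-to-context against fit-to-ground-reality; the function names and the probability tables below are illustrative assumptions.

```python
import math

def avg_log_loss(text, probs):
    """Mean bits per character under a model given as per-character
    probabilities (a stand-in for a system's real predictions)."""
    return sum(-math.log2(probs[c]) for c in text) / len(text)

def core_like_score(text, concept_probs, primitive_probs):
    """Toy CORE-flavored score: bits per character the concept-level model
    saves over the primitive (character-only) baseline. Positive means the
    learned concepts predict the episode better than primitives alone."""
    return avg_log_loss(text, primitive_probs) - avg_log_loss(text, concept_probs)

primitives = {c: 0.25 for c in "abcd"}                  # uniform character baseline
concepts = {"a": 0.6, "b": 0.1, "c": 0.15, "d": 0.15}   # concept-informed predictions
score = core_like_score("aaab", concepts, primitives)
```

A positive score rewards concepts that sharpen prediction of the observed episode, giving the learner a reason to keep building larger ones.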

Frontiers in Computational Neuroscience, 2023

Florian Röhrbein, Mario Senden