Journal Article Details
STOCHASTIC PROCESSES AND THEIR APPLICATIONS, Vol. 109
Time to absorption in discounted reinforcement models
Article
Pemantle, R.; Skyrms, B.
Keywords: network; social network; urn model; Friedman urn; stochastic approximation; meta-stable; trap; three-player game; potential well; exponential time; quasi-stationary
DOI: 10.1016/j.spa.2003.08.003
Source: Elsevier
【 Abstract 】

Reinforcement schemes are a class of non-Markovian stochastic processes. Their non-Markovian nature allows them to model a form of memory of the past. One subclass of such models is those in which the past is exponentially discounted or forgotten. Often, models in this subclass have the property of becoming trapped with probability 1 in some degenerate state. While previous work has concentrated on such limit results, we concentrate here on a contrary effect, namely that the time to become trapped may increase exponentially in 1/x as the discount rate, 1-x, approaches 1. As a result, the time to become trapped may easily exceed the lifetime of the simulation or of the physical data being modeled. In such a case, the quasi-stationary behavior is more germane. We apply our results to a model of social network formation based on ternary (three-person) interactions with uniform positive reinforcement. (C) 2003 Elsevier B.V. All rights reserved.
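The trapping effect described in the abstract can be illustrated with a minimal simulation. The sketch below is not the authors' model; it is a hypothetical two-option discounted reinforcement scheme (weights decay by a factor 1-x each step, the chosen option gains a unit of weight), and `trapping_time`, the threshold, and the step cap are all illustrative choices. With strong discounting (small factor) the process locks onto one option quickly; as the discount factor approaches 1, the time to lock on grows dramatically, consistent with the exponential-time phenomenon the paper studies.

```python
import random

def trapping_time(discount, threshold=0.99, max_steps=200_000, seed=0):
    """Steps until a two-option discounted reinforcement scheme is 'trapped'.

    Each step: choose an option with probability proportional to its
    weight, multiply both weights by `discount` (exponential forgetting),
    then add 1 to the chosen option's weight.  Returns the first step at
    which one option's choice probability exceeds `threshold` (a proxy
    for absorption), or `max_steps` if that never happens in time.
    """
    rng = random.Random(seed)
    w = [1.0, 1.0]
    for t in range(1, max_steps + 1):
        i = 0 if rng.random() < w[0] / (w[0] + w[1]) else 1
        w[0] *= discount   # forget the past at a geometric rate
        w[1] *= discount
        w[i] += 1.0        # reinforce the chosen option
        if max(w) / (w[0] + w[1]) > threshold:
            return t
    return max_steps

# Heavy forgetting traps almost immediately; weak forgetting does not.
fast = trapping_time(0.5)
slow = trapping_time(0.99)
print(fast, slow)
```

Under this toy dynamic the total weight settles near 1/(1-discount), so with discount 0.5 a short run of identical choices already drives one option's probability past the threshold, while with discount 0.99 the process hovers near the symmetric state for far longer than the step cap, which is exactly the quasi-stationary regime the abstract points to.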

【 License 】

Free
