Dissertation Details
A leader-follower partially observed Markov game
Chang, Yanling; Erera, Alan L.; White III, Chelsea C.; Ayer, Turgay; Zhou, Enlu; Dieci, Luca
University: Georgia Institute of Technology
Department: Industrial and Systems Engineering
Keywords: Risk analysis; Markov decision process; Real-time decision making; Value of information
Others: https://smartech.gatech.edu/bitstream/1853/54407/1/CHANG-DISSERTATION-2015.pdf
United States | English
Source: SMARTech Repository
【 Abstract 】

The intent of this dissertation is to generate a set of non-dominated finite-memory policies from which one of two agents (the leader) can select a most preferred policy to control a dynamic system that is also affected by the control decisions of the other agent (the follower). The problem is described by an infinite-horizon, total discounted reward, partially observed Markov game (POMG). Each agent's policy is based on its current and recent state values, its recent actions, and the current and recent, possibly inaccurate, observations of the other agent's state. For each candidate finite-memory leader policy, we assume the follower, fully aware of the leader policy, determines a policy that optimizes the follower's criterion. The leader-follower assumption allows the POMG to be transformed into a specially structured, partially observed Markov decision process that we use to determine the follower's best-response policy for a given leader policy. We then present a value determination procedure to evaluate the leader's performance for a given leader policy, from which a non-dominated set of leader policies can be selected by existing heuristic approaches. We then analyze how the value of the leader's criterion changes with the leader's quality of observation of the follower. We give conditions that ensure improved observation quality will improve the leader's value function, assuming that changes in observation quality do not cause the follower to change its policy. We show that discontinuities in the value of the leader's criterion, as a function of observation quality, can occur when the change in observation quality is significant enough for the follower to change its policy. We present conditions that determine when a discontinuity may occur and conditions that guarantee a discontinuity will not degrade the leader's performance. This framework has been used to develop a dynamic risk analysis approach for U.S. food supply chains and to compare and create supply chain designs and sequential control strategies for risk mitigation.
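A minimal sketch of the evaluation loop the abstract describes, under strong simplifying assumptions: a fully observed joint state (no partial observability), stationary memoryless policies rather than finite-memory ones, and small, randomly generated model data. All names and dimensions (nS, nAL, nAF, P, R_leader, R_follower) are illustrative and not from the dissertation, and the state-wise dominance filter at the end is a crude stand-in for the heuristic selection the abstract mentions.

import numpy as np

np.random.seed(0)
nS, nAL, nAF = 4, 2, 3        # states, leader actions, follower actions (illustrative)
gamma = 0.95                  # discount factor

# Joint transition kernel P[s, aL, aF, s'] and per-agent one-step rewards,
# randomly generated here purely for illustration.
P = np.random.dirichlet(np.ones(nS), size=(nS, nAL, nAF))
R_leader = np.random.rand(nS, nAL, nAF)
R_follower = np.random.rand(nS, nAL, nAF)

def follower_best_response(leader_policy, iters=500):
    # With the leader policy fixed, the follower faces an ordinary MDP;
    # solve it (approximately) by value iteration.
    V = np.zeros(nS)
    Q = np.zeros((nS, nAF))
    for _ in range(iters):
        for s in range(nS):
            aL = leader_policy[s]
            Q[s] = R_follower[s, aL] + gamma * P[s, aL] @ V
        V = Q.max(axis=1)
    return Q.argmax(axis=1)   # follower's best-response policy, s -> aF

def leader_value(leader_policy, follower_policy, iters=500):
    # Value determination for the leader under the fixed policy pair.
    V = np.zeros(nS)
    for _ in range(iters):
        V_new = np.empty(nS)
        for s in range(nS):
            aL, aF = leader_policy[s], follower_policy[s]
            V_new[s] = R_leader[s, aL, aF] + gamma * P[s, aL, aF] @ V
        V = V_new
    return V

# Enumerate candidate deterministic leader policies, evaluate each against the
# follower's best response, and keep the state-wise non-dominated candidates.
candidates = [np.array(p) for p in np.ndindex(*(nAL,) * nS)]
values = [leader_value(p, follower_best_response(p)) for p in candidates]
non_dominated = [i for i, v in enumerate(values)
                 if not any(np.all(w >= v) and np.any(w > v) for w in values)]
print("non-dominated leader policy indices:", non_dominated)

In the dissertation's setting the follower's problem is instead a specially structured partially observed MDP, and the leader's policies are finite-memory, so this sketch only conveys the shape of the best-response and value-determination steps, not the actual construction.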

【 Preview 】
Attachments
File: A leader-follower partially observed Markov game (3851 KB, PDF)
Document metrics
Downloads: 11; Views: 9