Dissertation Details
Quantifying Shared Information Value in a Supply Chain Using Decentralized Markov Decision Processes with Restricted Observations
Wei, Wenbin; Jacqueline Hughes-Oliver, Committee Member; Henry Nuttle, Committee Member; Thom Hodgson, Committee Co-Chair; Russell King, Committee Co-Chair
University: North Carolina State University
Keywords: information sharing; supply chain; successive approximation; decentralized Markov decision process with restricted observations; partially observable Markov decision process; perturbation; Markov decision process; transfer price negotiation; inventory policy
Others: https://repository.lib.ncsu.edu/bitstream/handle/1840.16/4189/etd.pdf?sequence=1&isAllowed=y
United States | English
PDF
【 Abstract 】
Information sharing in two-stage and three-stage supply chains is studied. Assuming the customer demand distribution is known along the supply chain, the information to be shared is the inventory level of each supply chain member. To study the value of shared information, the supply chain is examined under different information sharing schemes. A Markov decision process (MDP) approach is used to model the supply chain, and the optimal policy under each scheme is determined. By comparing these schemes, the value of shared information can be quantified. Since the optimal policy maximizes the total profit within the supply chain, allocation of that profit among supply chain members, i.e., transfer cost/price negotiation, is also discussed. The information sharing schemes include full information sharing, partial information sharing, and no information sharing. In the case of full information sharing, the supply chain problem is modeled as a single-agent Markov decision process with complete observations (a traditional MDP), which can be solved with the policy iteration method of Howard (1960). In the case of partial or no information sharing, the supply chain problem is modeled as a decentralized Markov decision process with restricted observations (DEC-ROMDP), in which each agent may have a complete observation of the process or only a restricted observation. To solve the DEC-ROMDP, an evolutionary coordination algorithm is introduced, which proves effective when coupled with policy perturbation and multiple-start strategies.
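As an illustration of the full-information case, the sketch below implements Howard-style policy iteration for a generic finite, discounted MDP. The transition tensor P, reward matrix R, discount factor gamma, and the toy data are assumptions introduced here for illustration; this is a minimal sketch of the cited method, not the thesis's supply-chain formulation or its profit model.

import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Howard (1960) policy iteration for a finite, discounted MDP (illustrative sketch).

    P: (A, S, S) array, P[a, s, s'] = transition probability under action a.
    R: (S, A) array, expected one-step reward (e.g., profit) for state s, action a.
    Returns a deterministic policy (length-S array of action indices) and its value v.
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)                    # arbitrary initial policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(S), :]              # (S, S) transitions under the policy
        r_pi = R[np.arange(S), policy]                 # (S,) rewards under the policy
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v.
        q = R.T + gamma * (P @ v)                      # (A, S) action values
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):         # stop when the policy is stable
            return policy, v
        policy = new_policy

# Toy usage on a random 2-action, 3-state MDP (hypothetical data).
rng = np.random.default_rng(0)
P = rng.random((2, 3, 3)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((3, 2))
print(policy_iteration(P, R))

The exact linear-system evaluation step is what distinguishes policy iteration from successive approximation (value iteration), which the keywords also mention; the decentralized, restricted-observation case requires the coordination algorithm described in the abstract rather than this single-agent routine.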
【 Preview 】
Attachment List
Files Size Format View
Quantifying Shared Information Value in a Supply Chain Using Decentralized Markov Decision Processes with Restricted Observations 679KB PDF download
Document Metrics
Downloads: 9    Views: 30