Journal: Mathematics
Title: A Multi-Depot Vehicle Routing Problem with Stochastic Road Capacity and Reduced Two-Stage Stochastic Integer Linear Programming Models for Rollout Algorithm
Authors: Wadi Khalid Anuar [1], Hsin-Vonn Seow [2], Stefan Pickl [3], Lai Soon Lee [4]
[1] Department of Logistics and Transportation, School of Technology Management and Logistics, Universiti Utara Malaysia, Sintok 06010, Kedah, Malaysia
[2] Faculty of Arts and Social Sciences, Nottingham University Business School, University of Nottingham Malaysia Campus, Jalan Broga, Semenyih 43500, Selangor, Malaysia
[3] Fakultät für Informatik, Universität der Bundeswehr München, 85577 Neubiberg, Germany
[4] Laboratory of Computational Statistics and Operations Research, Institute for Mathematical Research, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia
Keywords: reinforcement learning; approximate dynamic programming; rollout algorithm; matheuristic; two-stage stochastic programming; vehicle routing problem
DOI: 10.3390/math9131572
Source: DOAJ
【 Abstract 】
A matheuristic approach based on reduced two-stage Stochastic Integer Linear Programming (SILP) models is presented. The approach is suitable for constructing a policy dynamically, on the fly, during the rollout algorithm, which is part of the Approximate Dynamic Programming (ADP) lookahead solution approach for a Multi-Depot Dynamic Vehicle Routing Problem with Stochastic Road Capacity (MDDVRPSRC) framed as a Markov Decision Process (MDP). First, a Deterministic Multi-Depot VRP with Road Capacity (D-MDVRPRC) model is presented. It is then extended to MDVRPSRC-2S, an offline two-stage SILP model of the MDDVRPSRC. Both models are validated with CPLEX on small simulated instances. Next, two reduced versions of the MDVRPSRC-2S model (MDVRPSRC-2S1 and MDVRPSRC-2S2) are derived, each tailored to a specific routing task: replenishment or delivery of supplies. These reduced models are used interchangeably, depending on the remaining capacity of the vehicle, and solved repeatedly during the execution of rollout in reinforcement learning. As a result, it is shown that a base policy consisting of an exact optimal decision at each decision epoch can be obtained constructively from these reduced two-stage SILP models. Results obtained by executing the resulting rollout policy with CPLEX are also presented, validating the reduced models and the matheuristic algorithm. The approach is proposed as a simple implementation of rollout for the lookahead approach in ADP.
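For context, the models named in the abstract are instances of the generic two-stage stochastic integer linear program, given here in its standard textbook form (not reproduced from the paper):

\begin{aligned}
\min_{x}\quad & c^{\top}x \;+\; \mathbb{E}_{\xi}\big[\,Q(x,\xi)\,\big] \\
\text{s.t.}\quad & Ax = b,\qquad x \in \mathbb{Z}_{+}^{n},
\end{aligned}
\qquad
Q(x,\xi) \;=\; \min_{y}\big\{\, q(\xi)^{\top}y \;:\; Wy = h(\xi) - T(\xi)\,x,\;\; y \in \mathbb{Z}_{+}^{m} \,\big\},

where x collects the first-stage (here-and-now) decisions, y the second-stage recourse decisions, and \xi the random element, which in the MDDVRPSRC is the stochastic road capacity.

The sketch below then illustrates, in Python, the overall scheme the abstract describes: at each decision epoch the base policy is obtained by solving a reduced model to optimality, selected by the vehicle's remaining load, and candidate actions are scored by simulating that base policy over sampled scenarios. All names, the toy state, the costs, and the transition are hypothetical placeholders, not the authors' implementation; the paper solves MDVRPSRC-2S1/MDVRPSRC-2S2 with CPLEX where base_policy_action is called here.

import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    load: int        # supplies currently on board (illustrative)
    demands: tuple   # remaining demand at each customer node (illustrative)

def base_policy_action(state):
    # Stand-in for the exact reduced-model solves: the replenishment model
    # (MDVRPSRC-2S1) when the vehicle is empty, the delivery model
    # (MDVRPSRC-2S2) otherwise. The paper calls CPLEX at this point.
    if state.load == 0:
        return "replenish"
    target = max(range(len(state.demands)), key=lambda i: state.demands[i])
    return ("deliver", target)

def step(state, action, rng):
    # Toy transition: a random travel cost stands in for the sampled
    # stochastic road capacity of the MDDVRPSRC.
    cost = rng.uniform(1.0, 3.0)
    if action == "replenish":
        return replace(state, load=5), cost
    _, i = action
    served = min(state.load, state.demands[i])
    demands = tuple(d - served if j == i else d
                    for j, d in enumerate(state.demands))
    return State(load=state.load - served, demands=demands), cost

def rollout(state, candidate_actions, rng, n_scenarios=30):
    # One-step rollout: score each candidate action by the average cost of
    # completing all deliveries under the base policy across sampled
    # scenarios, then return the action with the best score.
    def cost_to_go(s):
        total = 0.0
        while any(s.demands):
            s, c = step(s, base_policy_action(s), rng)
            total += c
        return total
    best_action, best_cost = None, float("inf")
    for a in candidate_actions:
        total = 0.0
        for _ in range(n_scenarios):
            s1, c1 = step(state, a, rng)
            total += c1 + cost_to_go(s1)
        avg = total / n_scenarios
        if avg < best_cost:
            best_action, best_cost = a, avg
    return best_action

rng = random.Random(0)
s0 = State(load=3, demands=(4, 2, 3))
print(rollout(s0, [("deliver", 0), ("deliver", 1), ("deliver", 2)], rng))

Here the policy is constructed on the fly, exactly in the spirit of the abstract: no value function is stored, and each lookahead decision is backed by exact optimal base-policy decisions at every simulated epoch.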
【 授权许可】
Unknown