Decision-making under uncertainty: beyond probabilities. Challenges and perspectives
Article; Early Access
Keywords: VALUE-ITERATION; ROBUST-CONTROL; DISCRETE-TIME; SYSTEMS; APPROXIMATION; OPTIMIZATION; VERIFICATION; SEARCH
DOI: 10.1007/s10009-023-00704-3
Source: SCIE
【Abstract】
This position paper reflects on the state of the art in decision-making under uncertainty. A classical assumption is that probabilities can sufficiently capture all uncertainty in a system. In this paper, the focus is on the uncertainty that goes beyond this classical interpretation, particularly by drawing a clear distinction between aleatoric and epistemic uncertainty. The paper features an overview of Markov decision processes (MDPs) and extensions that account for partial observability and adversarial behavior. These models sufficiently capture aleatoric uncertainty but fail to account for epistemic uncertainty robustly. Consequently, we present a thorough overview of so-called uncertainty models that capture uncertainty in a more robust interpretation. We show several solution techniques for both discrete and continuous models, ranging from formal verification and control-based abstractions to reinforcement learning. As an integral part of this paper, we list and discuss several key challenges that arise when dealing with rich types of uncertainty in a model-based fashion.
【License】
Free
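【Illustrative sketch】
The following minimal sketch (not taken from the paper; the toy model, its numbers, and all function names are hypothetical) illustrates the distinction the abstract draws: classical value iteration treats the transition probabilities of an MDP as exact (aleatoric uncertainty only), whereas a robust variant on an interval MDP resolves epistemic uncertainty about those probabilities adversarially.

import numpy as np

GAMMA, TOL = 0.95, 1e-8

# Toy model: 3 states, 2 actions. P[a][s] is a distribution over successor states.
P = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.3, 0.5]],   # action 0
    [[0.4, 0.5, 0.1], [0.3, 0.3, 0.4], [0.1, 0.1, 0.8]],   # action 1
])
R = np.array([[1.0, 0.0, 2.0],   # reward R[a][s]
              [0.5, 1.5, 0.0]])

def value_iteration(P, R, gamma=GAMMA, tol=TOL):
    """Classical value iteration: transition probabilities are assumed exact."""
    V = np.zeros(R.shape[1])
    while True:
        Q = R + gamma * np.einsum("asn,n->as", P, V)       # Q[a][s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def worst_case_expectation(lo, hi, V):
    """Adversarial inner problem of an interval MDP: choose a distribution
    within [lo, hi] (component-wise, summing to 1) that minimises E[V]."""
    p = lo.copy()
    budget = 1.0 - lo.sum()            # probability mass still to distribute
    for i in np.argsort(V):            # give extra mass to low-value successors first
        add = min(budget, hi[i] - lo[i])
        p[i] += add
        budget -= add
    return p @ V

def robust_value_iteration(P_lo, P_hi, R, gamma=GAMMA, tol=TOL):
    """Robust (pessimistic) value iteration for an interval MDP."""
    n_actions, n_states = R.shape
    V = np.zeros(n_states)
    while True:
        Q = np.array([[R[a, s] + gamma * worst_case_expectation(P_lo[a, s], P_hi[a, s], V)
                       for s in range(n_states)] for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Epistemic uncertainty: each transition probability is only known up to +/- 0.1.
P_lo = np.clip(P - 0.1, 0.0, 1.0)
P_hi = np.clip(P + 0.1, 0.0, 1.0)

print("nominal V*:", np.round(value_iteration(P, R), 3))
print("robust  V*:", np.round(robust_value_iteration(P_lo, P_hi, R), 3))

The robust values are never higher than the nominal ones, since the adversary shifts probability mass toward low-value successors within the intervals; this is one simple instance of the robust interpretation of epistemic uncertainty discussed in the paper.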