IEEE Access | Volume: 8
Generative Adversarial Networks for Stochastic Video Prediction With Action Control
Turki Turki [1], Jason T. L. Wang [2], Zhihang Hu [2]
[1] Department of Computer Science, King Abdulaziz University, Jeddah, Saudi Arabia
[2] Department of Computer Science, New Jersey Institute of Technology, Newark, NJ, USA
Keywords: cycle consistency; deep learning; generative adversarial networks; video prediction
DOI: 10.1109/ACCESS.2020.2982750
Source: DOAJ
【 Abstract 】
Predicting future frames in video sequences, a task known as video prediction, is an appealing yet challenging problem in computer vision. It requires an in-depth representation of video sequences and a deep understanding of real-world causal rules. Existing approaches to video prediction fall into two categories: deterministic and stochastic. Deterministic methods cannot generate multiple possible future frames and often yield blurry predictions. Current stochastic approaches, on the other hand, can predict possible future frames, but they lack action control: they cannot generate desired future frames conditioned on a specific action. In this paper, we propose new generative adversarial networks (GANs) for stochastic video prediction. Our framework, called VPGAN, employs an adversarial inference model and a cycle-consistency loss function to obtain more accurate predictions. In addition, we incorporate a conformal mapping network structure into VPGAN to enable action control, so that the framework can generate desirable future frames. In this way, VPGAN is able to produce fake videos of an object moving along a specified direction. Experimental results show that combining VPGAN with a pre-trained image segmentation model outperforms existing stochastic video prediction methods.
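The cycle-consistency idea mentioned in the abstract can be illustrated with a minimal sketch: an inference model E maps a frame to a latent code, a generator G maps the code back to frame space, and the loss penalizes the round-trip reconstruction error. The toy linear maps A and its inverse below stand in for the paper's actual networks and are purely illustrative assumptions, not VPGAN's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's networks (assumptions for illustration):
# "inference model"  E(x) = A @ x  maps a frame to a latent code,
# "generator"        G(z) = A_inv @ z  maps the code back to frame space.
A = rng.standard_normal((4, 4))
A_inv = np.linalg.inv(A)

def cycle_consistency_loss(x):
    """L1 distance between a frame and its encode-decode reconstruction."""
    z = A @ x            # infer latent code from the frame
    x_rec = A_inv @ z    # generate the frame back from the code
    return np.abs(x - x_rec).mean()

x = rng.standard_normal(4)     # stand-in for a flattened video frame
loss = cycle_consistency_loss(x)
print(loss)                    # near 0: G exactly inverts E in this toy
```

In a real GAN framework this term is added to the adversarial objective, pushing the inference model and generator to remain mutually consistent rather than each drifting to fool the discriminator independently.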
【 License 】
Unknown