Sensors
Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features
Yao Wang1, Zujun Yu1, Liqiang Zhu1
[1] School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 100044, China
Keywords: fully convolutional networks; 3D convolutional networks; foreground detection; background modeling; deep learning; deep neural networks
DOI: 10.3390/s18124269
Source: DOAJ
【 Abstract 】
Foreground detection, which extracts moving objects from videos, is an important and fundamental problem in video analysis. Classic methods often build background models based on hand-crafted features. Recent deep neural network (DNN) based methods can learn more effective image features through training, but most of them either do not use temporal features or rely on simple hand-crafted ones. In this paper, we propose a new dual multi-scale 3D fully-convolutional neural network for foreground detection. It uses an encoder-decoder structure to establish a mapping from image sequences to pixel-wise classification results. We also propose a two-stage training procedure that trains the encoder and decoder separately to improve the training results. With its multi-scale architecture, the network can learn deep, hierarchical multi-scale features in both the spatial and temporal domains, which are shown to have good invariance to both spatial and temporal scales. We evaluated our method on the CDnet dataset, currently the largest foreground detection dataset. The experimental results show that the proposed method achieves state-of-the-art results in most test scenes compared to current DNN-based methods.
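To make the encoder-decoder idea concrete, the following is a minimal sketch (not the authors' released code) of a 3D fully-convolutional network that maps a short image sequence to a per-pixel foreground score map, with the encoder learning joint spatial-temporal features and the decoder restoring the spatial resolution. All layer widths, kernel sizes, and the number of input frames are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a 3D fully-convolutional encoder-decoder for foreground
# detection. This is an illustrative assumption of the general architecture
# described in the abstract, not the authors' actual model.
import torch
import torch.nn as nn


class Simple3DFCN(nn.Module):
    def __init__(self, in_channels: int = 3, num_frames: int = 8):
        super().__init__()
        # Encoder: 3D convolutions learn spatial-temporal features and
        # progressively downsample space while collapsing the time dimension.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(2, 2, 2)),                 # halve time and space
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(num_frames // 2, 2, 2)),   # collapse remaining time
        )
        # Decoder: 2D transposed convolutions upsample back to the input
        # resolution and produce a single-channel foreground logit per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        feat = self.encoder(clip)   # (batch, 64, 1, H/4, W/4)
        feat = feat.squeeze(2)      # drop the collapsed time dimension
        return self.decoder(feat)   # (batch, 1, H, W) foreground logits


if __name__ == "__main__":
    model = Simple3DFCN()
    frames = torch.randn(2, 3, 8, 64, 64)   # two clips of eight 64x64 RGB frames
    print(model(frames).shape)               # torch.Size([2, 1, 64, 64])
```

The two-stage training the abstract mentions would, under this sketch, correspond to first optimizing the encoder (e.g., with an auxiliary objective) and then training the decoder on the frozen encoder features before joint fine-tuning; the exact procedure is described in the paper itself.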
【 License 】
Unknown