IEEE Access

Spatio-Temporal Self-Attention Network for Fire Detection and Segmentation in Video Surveillance

Kai-Lung Hua [1], Mohammad Shahid [1], John Jethro Virtusio [1], Yu-Hsien Wu [1], Yung-Yao Chen [2], M. Tanveer [3], Khan Muhammad [4]

[1] Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
[2] Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
[3] Discipline of Mathematics, IIT Indore, Indore, India
[4] Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, Republic of Korea
Keywords: Fire detection; early detection; disaster management; small-sized fire; video fire segmentation; semi-supervised
DOI: 10.1109/ACCESS.2021.3132787
Source: DOAJ
【Abstract】
Convolutional Neural Networks (CNNs) are popular for various image/video tasks due to their state-of-the-art performance. However, for problems like object detection and segmentation, CNNs still struggle with objects that have arbitrary shapes and sizes, occlusions, and varying viewpoints. This limitation makes them largely unsuitable for fire detection and segmentation, since flames can have an unpredictable scale and shape. In this paper, we propose a method that detects and segments fire regions while explicitly accounting for their arbitrary sizes and shapes. Specifically, our approach uses a self-attention mechanism to augment spatial characteristics with temporal features, allowing the network to reduce its reliance on spatial factors like shape or size and to exploit robust spatio-temporal dependencies. Our pipeline has two stages: in the first stage, we extract region proposals using spatio-temporal features, and in the second stage, we classify each region proposal as flame or non-flame. Due to the scarcity of large fire datasets, we adopt a transfer learning strategy and pre-train our classifier on the ImageNet dataset. Additionally, our spatio-temporal network requires only semi-supervision: a single ground-truth segmentation mask per input frame sequence. Experimental results show that our method significantly outperforms state-of-the-art fire detection methods, with a 2% to 4% relative improvement in F1-score for large-scale fires and a nearly 60% relative improvement for small fires at a very early stage.
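The abstract does not include implementation details, but a minimal PyTorch sketch of the kind of spatio-temporal self-attention it describes might look as follows. The module name, the query/key projection size (channels // 8), and the learned residual weight `gamma` are illustrative assumptions, not the authors' released code: every (frame, location) token of a video clip attends to every other one, so per-frame spatial features are mixed with temporal context.

```python
import torch
import torch.nn as nn


class SpatioTemporalSelfAttention(nn.Module):
    """Scaled dot-product self-attention over all (frame, location)
    tokens of a clip, augmenting spatial features with temporal context."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width) CNN feature maps.
        b, t, c, h, w = x.shape
        flat = x.reshape(b * t, c, h, w)

        def tokens(conv: nn.Module) -> torch.Tensor:
            # (B*T, C', H, W) -> (B, T*H*W, C') token sequence.
            return (conv(flat)
                    .reshape(b, t, -1, h * w)
                    .permute(0, 1, 3, 2)
                    .reshape(b, t * h * w, -1))

        q, k, v = tokens(self.query), tokens(self.key), tokens(self.value)
        # Each token attends to every other token across space AND time.
        attn = torch.softmax(
            q @ k.transpose(1, 2) / (q.shape[-1] ** 0.5), dim=-1)
        out = ((attn @ v)
               .reshape(b, t, h * w, c)
               .permute(0, 1, 3, 2)
               .reshape(b, t, c, h, w))
        # Residual connection: gamma controls how much temporal context
        # is mixed into the original spatial features.
        return x + self.gamma * out


# Usage: two hypothetical 8-frame clips of 64-channel feature maps.
block = SpatioTemporalSelfAttention(channels=64)
clip = torch.randn(2, 8, 64, 28, 28)
assert block(clip).shape == clip.shape
```

Because attention runs over the joint space-time token set rather than per frame, a flickering flame region can be reinforced by its appearance in neighboring frames even when its shape or size in any single frame is unreliable, which is the intuition the abstract gives for reducing reliance on spatial factors.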
【License】
Unknown