Journal Article Details
Mathematics
DSTnet: Deformable Spatio-Temporal Convolutional Residual Network for Video Super-Resolution
Anusha Khan [1], Zulfiqar Habib [1], Allah Bux Sargano [1]
[1] Department of Computer Science, COMSATS University Islamabad, Lahore 54000, Pakistan;
Keywords: video super-resolution; deformable convolution; 3D convolution; spatio-temporal; residual neural network; deep learning
DOI: 10.3390/math9222873
Source: DOAJ
[ Abstract ]

Video super-resolution (VSR) aims to generate high-resolution (HR) video frames with plausible and temporally consistent details from their low-resolution (LR) counterparts and neighboring frames. The key challenge for VSR lies in effectively exploiting the intra-frame spatial relations and the temporal dependencies between consecutive frames. Many existing techniques utilize spatial and temporal information separately and compensate motion via alignment. These methods cannot fully exploit the spatio-temporal information that significantly affects the quality of the resulting HR videos. In this work, a novel deformable spatio-temporal convolutional residual network (DSTnet) is proposed to overcome the limitations of separate motion estimation and compensation methods for VSR. The proposed framework consists of 3D convolutional residual blocks decomposed into spatial and temporal (2+1)D streams. This decomposition can simultaneously exploit the input video's spatial and temporal features without a separate motion estimation and compensation module. Furthermore, deformable convolution layers are used in the proposed model to enhance its motion-awareness capability. Our contribution is twofold: first, the proposed approach overcomes the challenges of modeling complex motions by efficiently using spatio-temporal information; second, the proposed model has fewer parameters to learn than state-of-the-art methods, making it a computationally lean and efficient framework for VSR. Experiments are conducted on the benchmark Vid4 dataset to evaluate the efficacy of the proposed approach. The results demonstrate that the proposed approach achieves superior quantitative and qualitative performance compared to state-of-the-art methods.
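The parameter savings claimed for the (2+1)D decomposition can be illustrated with a simple count: a full t x k x k 3D convolution is replaced by a 1 x k x k spatial convolution followed by a t x 1 x 1 temporal convolution. The sketch below is a minimal illustration of this general factorization idea, not the paper's actual implementation; the channel sizes and the choice of intermediate width are assumptions for demonstration.

```python
def conv3d_params(c_in, c_out, t, k):
    """Weights in a full t x k x k 3D convolution (bias ignored)."""
    return c_in * c_out * t * k * k


def conv2plus1d_params(c_in, c_out, t, k, c_mid):
    """Weights when the same kernel is factorized into a 1 x k x k
    spatial conv (c_in -> c_mid) followed by a t x 1 x 1 temporal
    conv (c_mid -> c_out). c_mid is an assumed intermediate width."""
    spatial = c_in * c_mid * k * k       # 1 x k x k spatial stream
    temporal = c_mid * c_out * t         # t x 1 x 1 temporal stream
    return spatial + temporal


if __name__ == "__main__":
    # Illustrative block: 64 channels in/out, 3 x 3 x 3 kernel.
    full = conv3d_params(64, 64, t=3, k=3)
    split = conv2plus1d_params(64, 64, t=3, k=3, c_mid=64)
    print(full, split)  # the factorized form uses fewer weights
```

With these assumed sizes the factorized block needs 49,152 weights versus 110,592 for the full 3D kernel, consistent with the abstract's claim of a leaner model; if `c_mid` were instead inflated to match the full parameter count (as in some (2+1)D designs), the benefit would be added nonlinearity rather than fewer parameters.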

[ License ]

Unknown
