Journal Article Details
IEEE Access
Model-Based Deep Network for Single Image Deraining
Chengdong Wu [1], Pengyue Li [1], Yandong Tang [2], Guolin Wang [2], Jiandong Tian [2]
[1] Faculty of Robot Science and Engineering, Northeastern University, Shenyang, China
[2] State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
Keywords: Rain removal; nonlinear rain model; channel attention U-DenseNet; residual dense block; image restoration
DOI: 10.1109/ACCESS.2020.2965545
Source: DOAJ
【 Abstract 】

Current learning-based single image deraining methods usually design their networks around a simplified linear additive rain model, which not only yields unrealistic synthetic rainy images for both training and testing datasets, but also limits the applicability and generality of the corresponding networks. In this paper, we use the screen blend model of Photoshop as a nonlinear rainy image decomposition model. Based on this model, we design a novel channel attention U-DenseNet for rain detection and a residual dense block for rain removal. The detection sub-network not only adjusts channel-wise feature responses through our novel channel attention block so that the network focuses on learning the rain map, but also combines context information with precise localization via the U-DenseNet to improve pixel-wise estimation accuracy. After rain detection, we use the nonlinear model to obtain a coarse rain-free image, and then introduce a deraining refinement sub-network consisting of residual dense blocks to produce a fine rain-free image. To train our network, we apply the nonlinear rain model to synthesize a benchmark dataset called RITD. It contains 3200 triplets of rainy images, rain maps, and clean background images. Extensive quantitative and qualitative experimental results show that our method outperforms several state-of-the-art methods on both synthetic and real images.
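For reference, the Photoshop screen blend referred to above composites a background B and a rain map R (both normalized to [0, 1]) as O = 1 - (1 - B)(1 - R) = B + R - B*R, so a coarse rain-free image can be recovered by inverting this relation once the rain map is estimated. The following is a minimal sketch of that forward model and its inversion, assuming only the standard screen blend formula; the function names and the clamping epsilon are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def screen_blend(background, rain_map):
        """Composite a rainy image with the Photoshop screen blend:
        O = 1 - (1 - B) * (1 - R), with all images in [0, 1]."""
        return 1.0 - (1.0 - background) * (1.0 - rain_map)

    def coarse_derain(rainy, rain_map, eps=1e-3):
        """Invert the screen blend for a coarse rain-free estimate:
        B = (O - R) / (1 - R). The epsilon guards against division by
        zero where the estimated rain map saturates near 1 (an
        assumption for numerical stability, not taken from the paper)."""
        background = (rainy - rain_map) / np.clip(1.0 - rain_map, eps, 1.0)
        return np.clip(background, 0.0, 1.0)

    # Toy usage: synthesize a rainy image, then invert with the true rain map.
    B = np.random.rand(64, 64, 3)          # clean background
    R = np.random.rand(64, 64, 1) * 0.3    # rain map, broadcast over channels
    O = screen_blend(B, R)
    B_coarse = coarse_derain(O, R)
    print(np.abs(B_coarse - B).max())      # near zero when R is exact

In the paper's pipeline, R would come from the detection sub-network rather than being known exactly, which is why the coarse estimate is then passed to the refinement sub-network.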

【 License 】

Unknown   
