Journal Article Details
NEUROCOMPUTING, Volume 415
Unsupervised domain adaptation with self-attention for post-disaster building damage detection
Article
Li, Yundong [1]; Lin, Chen [1]; Li, Hongguang [2,3]; Hu, Wei [1]; Dong, Han [1]; Liu, Yi [1]
[1] North China Univ Technol, Sch Informat Sci & Technol, Beijing, Peoples R China
[2] Beihang Univ, Unmanned Syst Res Inst, Beijing, Peoples R China
[3] Chinese Acad Sci, Shenzhen Inst Adv Technol, Guangdong Prov Key Lab Comp Vis & Virtual Real Te, Shenzhen, Peoples R China
Keywords: Unsupervised domain adaptation; Self-attention; Hurricane damage; Damage assessment
DOI: 10.1016/j.neucom.2020.07.005
Source: Elsevier
【 Abstract 】

Fast assessment of damaged buildings is important for post-disaster rescue operations. Building damage detection that leverages image processing and machine learning techniques has become a popular research focus in recent years. Although supervised learning approaches have brought considerable improvements to damaged building assessment, rapidly deploying supervised classification remains challenging because it is hard to obtain a large number of labeled samples in the aftermath of a disaster. To address this issue, we propose an unsupervised self-attention domain adaptation (USADA) model, which transforms instances of the source domain into those of the target domain in pixel space. The proposed USADA consists of three parts: a set of generative adversarial networks (GANs), a classifier, and a self-attention module. The GAN adapts source domain images so that they resemble target domain images. Once adaptation is complete, the classifier can be trained on the adapted images, together with the original source domain images, to classify damaged buildings. The self-attention module is introduced to preserve the foreground of the generated images, conditioned on the source domain images, so that plausible samples are produced. As a case study, aerial images of Hurricanes Sandy, Maria, and Irma are used as the source and target domain datasets in our experiments. Experimental results show that classification accuracies of 68.1% and 84.1% are achieved, improvements of 2.0% and 3.6% over pixel-level domain adaptation, which is the basis of our model. (C) 2020 The Authors. Published by Elsevier B.V.
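To make the self-attention component more concrete, the sketch below is a minimal PyTorch implementation of a SAGAN-style self-attention block over convolutional feature maps, one common way such a module is realized; the class name SelfAttention2d, the channel reduction ratio of 8, and the zero-initialized residual weight gamma are illustrative assumptions rather than the authors' exact design.

import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    # Illustrative SAGAN-style self-attention over feature maps of shape B x C x H x W.
    # Hyperparameters (channel reduction ratio of 8, zero-initialized gamma) are assumptions.
    def __init__(self, in_channels):
        super().__init__()
        # 1x1 convolutions produce query/key/value projections
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # learnable residual weight, starts at 0 so attention is phased in during training
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # B x N x C'
        k = self.key(x).view(b, -1, n)                       # B x C' x N
        attn = torch.softmax(torch.bmm(q, k), dim=-1)        # B x N x N attention map
        v = self.value(x).view(b, -1, n)                     # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                          # residual connection

In a pixel-level adaptation generator, a block like this would typically sit after an intermediate convolutional layer, so the attention map can emphasize the building foreground in the feature maps before the adapted image is produced.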

【 License 】

Free   

【 Preview 】
Attachments
File | Size | Format
10_1016_j_neucom_2020_07_005.pdf | 3188 KB | PDF