International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
EXTRACTION OF BUILT-UP AREAS USING CONVOLUTIONAL NEURAL NETWORKS AND TRANSFER LEARNING FROM SENTINEL-2 SATELLITE IMAGES
Bramhe, V. S.^1
[1] Geomatics Engineering Group, Civil Engineering Department, IIT Roorkee, 247667, India
Keywords: Built-up Area Extraction; Convolutional Neural Networks; Deep Learning; Sentinel-2 Images; Transfer Learning
DOI: 10.5194/isprs-archives-XLII-3-79-2018
Subject classification: Earth Sciences (General)
Source: Copernicus Publications
Abstract
With rapid globalization, the extent of built-up areas is continuously increasing. Extraction of robust features for classifying built-up areas has been a leading research topic for many years. Various studies have utilized spatial information along with spectral features to enhance classification accuracy; however, these feature extraction techniques require a large number of user-specified parameters and are generally application specific. Recently introduced Deep Learning (DL) techniques, on the other hand, require fewer parameters to represent more aspects of the data without any manual effort. Since it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring, Sentinel-2 imagery has been used in this study for built-up area extraction. Pre-trained Convolutional Neural Networks (ConvNets), namely Inception-v3 and VGGNet, are employed for transfer learning. Because these networks are trained on generic images from the ImageNet dataset, whose characteristics differ greatly from satellite images, the network weights are fine-tuned using data derived from Sentinel-2 images. To compare the accuracies with existing shallow networks, two state-of-the-art classifiers, a Gaussian Support Vector Machine (SVM) and a Back-Propagation Neural Network (BP-NN), are also implemented. SVM and BP-NN give overall accuracies of 84.31 % and 82.86 %, respectively, whereas the fine-tuned VGGNet gives 89.43 % and the fine-tuned Inception-v3 gives 92.10 %. The results indicate the high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.
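The abstract describes fine-tuning ImageNet-pretrained ConvNets on 4-channel Sentinel-2 data. The sketch below illustrates one way such a transfer-learning setup could look; it is not the authors' implementation. It assumes PyTorch/torchvision VGG16, a hypothetical 4-band patch layout (e.g. B02, B03, B04, B08), and illustrative layer-freezing and training settings.

```python
# Hypothetical sketch: adapting an ImageNet-pretrained VGG16 to 4-channel
# Sentinel-2 patches for binary built-up / non-built-up classification.
# Band choice, patch size, frozen layers and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Load VGG16 with ImageNet weights (the generic features being transferred).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# The first conv layer expects 3 channels; rebuild it for 4 Sentinel-2 bands
# and initialise the extra band with the mean of the pretrained RGB kernels
# so the learned filters are reused rather than discarded.
old_conv = vgg.features[0]
new_conv = nn.Conv2d(4, old_conv.out_channels, kernel_size=3, padding=1)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)
    new_conv.bias.copy_(old_conv.bias)
vgg.features[0] = new_conv

# Replace the 1000-class ImageNet head with a 2-class head
# (built-up vs. non-built-up).
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, 2)

# Fine-tune: freeze the earliest convolutional layers, train the rest.
for param in vgg.features[:10].parameters():
    param.requires_grad = False

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, vgg.parameters()), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 4-band 64x64 patches.
x = torch.randn(8, 4, 64, 64)   # stand-in for Sentinel-2 patches
y = torch.randint(0, 2, (8,))   # labels: 1 = built-up, 0 = non-built-up
optimizer.zero_grad()
loss = criterion(vgg(x), y)
loss.backward()
optimizer.step()
```

A fine-tuned Inception-v3 would follow the same pattern with its own stem convolution and classifier head swapped out.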
License
CC BY