Journal Article Details
Remote Sensing
A Quantitative Validation of Multi-Modal Image Fusion and Segmentation for Object Detection and Tracking
Michael J. Garay [1]; Nicholas LaHaye [1]; Brian D. Bue [1]; Erik Linstead [2]; Hesham El-Askary [3]
[1] Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
[2] Machine Learning and Assistive Technology Lab (MLAT), Chapman University, Orange, CA 92866, USA
[3] Schmid College of Science and Technology, Chapman University, Orange, CA 92866, USA
Keywords: big data applications; clustering; computer vision; restricted Boltzmann machines (RBMs); unsupervised machine learning; image segmentation
DOI: 10.3390/rs13122364
Source: DOAJ
【Abstract】

In previous work, we showed the efficacy of pairing Deep Belief Networks with clustering to identify distinct classes of objects within remotely sensed data, via cluster analysis and qualitative comparison of the output against reference data. In this paper, we quantitatively validate the methodology against datasets currently generated and used within the remote sensing community, and demonstrate the capabilities and benefits of the data fusion methodologies employed. The experiments map the output of our unsupervised fusion and segmentation methodology to labeled datasets at different levels of global coverage and granularity, testing our models' ability to represent structure at both finer and broader scales across many kinds of instrumentation, fused where applicable. In all cases tested, our models show a strong ability to segment the objects within input scenes, to leverage multiple fused datasets where appropriate to improve results, and, at times, to outperform the pre-existing datasets. This success will allow the methodology to be applied to concrete use cases and to serve as the basis for future dynamic object tracking across datasets from various remote sensing instruments.
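As a concrete illustration of the pipeline the abstract describes (this is not the authors' released code), the sketch below approximates the approach under stated assumptions: two stacked restricted Boltzmann machines stand in for a greedily pretrained Deep Belief Network, k-means stands in for the clustering stage, and a majority-vote mapping of clusters onto reference labels stands in for the quantitative validation against labeled datasets. The function `segment_and_validate` and its parameters are illustrative inventions; only scikit-learn's `BernoulliRBM`, `KMeans`, `MinMaxScaler`, and `Pipeline` are real APIs.

```python
# Hedged sketch of an RBM+clustering segmentation pipeline with a
# majority-vote validation step, assumed to approximate the methodology
# described in the abstract. Not the authors' implementation.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans

def segment_and_validate(pixels, reference_labels, n_clusters=8, seed=0):
    """pixels: (n_samples, n_bands) fused multi-modal spectra.
    reference_labels: (n_samples,) integer classes from a labeled dataset."""
    # Two stacked RBMs act as a shallow stand-in for a Deep Belief Network;
    # each layer learns a latent representation of the layer below.
    dbn = Pipeline([
        ("scale", MinMaxScaler()),  # RBM inputs should lie in [0, 1]
        ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, random_state=seed)),
        ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, random_state=seed)),
    ])
    features = dbn.fit_transform(pixels)

    # Unsupervised segmentation of the learned representation.
    clusters = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(features)

    # Quantitative validation: map each cluster to its dominant reference
    # class (majority vote), then score agreement with the labels.
    mapping = {c: np.bincount(reference_labels[clusters == c]).argmax()
               for c in np.unique(clusters)}
    predicted = np.array([mapping[c] for c in clusters])
    return predicted, (predicted == reference_labels).mean()
```

The majority-vote mapping is one simple way to compare unsupervised cluster output against labeled reference data; the paper's own evaluation may use a different cluster-to-class assignment or accuracy metric.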

【License】

Unknown   
