PeerJ
Fusion neural networks for plant classification: learning to combine RGB, hyperspectral, and lidar data
article
Victoria M. Scholl [1], Joseph McGlinchy [1], Teo Price-Broncucia [3], Jennifer K. Balch [1], Maxwell B. Joseph [1]
[1] Earth Lab, Cooperative Institute for Research in Environmental Science, University of Colorado at Boulder; [2] Department of Geography, University of Colorado at Boulder; [3] Department of Computer Science, University of Colorado at Boulder
Keywords: Machine learning; Deep learning; Species classification; Remote sensing; Airborne remote sensing; National Ecological Observatory Network; Data science competition; Neural networks; Open science
DOI: 10.7717/peerj.11790
Subject classification: Social Sciences, Humanities and Arts (General)
Source: Inra
【 Abstract 】
Airborne remote sensing offers unprecedented opportunities to efficiently monitor vegetation, but methods to delineate and classify individual plant species using the collected data are still actively being developed and improved. The Integrating Data science with Trees and Remote Sensing (IDTReeS) plant identification competition openly invited scientists to create and compare individual tree mapping methods. Participants were tasked with training taxon identification algorithms on data from two sites and then transferring their methods to a third, unseen site, using field-based plant observations in combination with airborne remote sensing image data products from the National Ecological Observatory Network (NEON). These data were captured by a high-resolution digital camera sensitive to red, green, and blue (RGB) light, a hyperspectral imaging spectrometer spanning visible to shortwave infrared wavelengths, and a lidar system, together capturing the spectral and structural properties of vegetation. As participants in the IDTReeS competition, we developed a two-stage deep learning approach to integrate NEON remote sensing data from all three sensors and classify individual plant species and genera. The first stage was a convolutional neural network that generates taxon probabilities from RGB images, and the second stage was a fusion neural network that “learns” how to combine these probabilities with hyperspectral and lidar data. Our two-stage approach leverages the ability of neural networks to flexibly and automatically extract descriptive features from complex, high-dimensional image data. Our method achieved an overall classification accuracy of 0.51 on the training set and 0.32 on the test set, which contained data from an unseen site with unknown taxon classes. Although transferring classification algorithms to unseen sites with unknown species and genus classes proved challenging, developing methods with openly available NEON data, which will be collected in a standardized format for 30 years, allows for continual improvement and major gains for the computational ecology community. We outline promising directions related to data preparation and processing techniques for further investigation, and provide our code to contribute to open, reproducible science.
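To make the two-stage design described in the abstract concrete, the sketch below shows one way such an architecture could be wired up in PyTorch. The class names (`RGBBackbone`, `FusionNet`), layer sizes, band count, and lidar feature choices are illustrative assumptions, not the authors' implementation; their actual code accompanies the paper.

```python
import torch
import torch.nn as nn

# Assumed dimensions for illustration only (not from the paper):
NUM_TAXA = 10      # number of species/genus classes
HSI_BANDS = 369    # hyperspectral band count after filtering noisy bands
LIDAR_FEATS = 4    # e.g., height summaries derived from the lidar data

class RGBBackbone(nn.Module):
    """Stage 1 (sketch): CNN mapping an RGB crop to per-taxon probabilities."""
    def __init__(self, num_taxa=NUM_TAXA):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_taxa)

    def forward(self, rgb):
        x = self.features(rgb).flatten(1)
        return torch.softmax(self.classifier(x), dim=1)  # taxon probabilities

class FusionNet(nn.Module):
    """Stage 2 (sketch): MLP that learns to combine stage-1 probabilities
    with hyperspectral reflectance and lidar-derived structure features."""
    def __init__(self, num_taxa=NUM_TAXA, hsi=HSI_BANDS, lidar=LIDAR_FEATS):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_taxa + hsi + lidar, 128), nn.ReLU(),
            nn.Linear(128, num_taxa),
        )

    def forward(self, rgb_probs, hsi, lidar):
        fused = torch.cat([rgb_probs, hsi, lidar], dim=1)
        return self.mlp(fused)  # logits for the final taxon prediction

# Usage: stage 1 runs on RGB crops; its output becomes an input to stage 2.
rgb = torch.randn(8, 3, 64, 64)       # batch of RGB crops
hsi = torch.randn(8, HSI_BANDS)       # per-crown hyperspectral reflectance
lidar = torch.randn(8, LIDAR_FEATS)   # per-crown lidar structure summary
stage1, stage2 = RGBBackbone(), FusionNet()
logits = stage2(stage1(rgb), hsi, lidar)  # shape: (8, NUM_TAXA)
```

Feeding stage-1 probabilities, rather than raw RGB features, into the fusion network keeps the second stage small and lets it learn how much weight to give the RGB evidence relative to the hyperspectral and lidar inputs.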
【 License 】
CC BY