U-Net Convolutional Networks for Mining Land Cover Classification Based on High-Resolution UAV Imagery



Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 186257-186273
Authors: Giang, Tuan Linh; Dang, Kinh Bac; Toan Le, Quang; Nguyen, Vu Giang; Tong, Si Son; Pham, Van-Manh
Format: Article
Language: English
Online access: Full text
Description
Abstract: Mining activities are the leading cause of deforestation, land-use change, and pollution. The land use/cover mapping carried out in Vietnam every five years is not sufficient to monitor land cover in mining areas, especially in the Central Highlands region. Managers therefore need a better tool for monitoring and mapping land cover from high-resolution images. The authors propose using the U-Net convolutional network for land-cover classification based on multispectral unmanned aerial vehicle (UAV) imagery of a mining area in Daknong province, Vietnam. An area of 0.5 km × 0.8 km was used to train and test seven U-Net models, each with a different optimizer function. The final U-Net model can interpret six land cover types: (1) open-cast mining lands, (2) old permanent croplands, (3) young permanent croplands, (4) grasslands, (5) bare soils, and (6) water bodies. Two models, using the Nadam and Adadelta optimizer functions, classified the six land cover types with accuracy higher than 83%, particularly for open-cast mining lands and for polluted streams flowing out of the mining areas. The trained U-Net models can potentially be updated with new land cover types in other mining areas, moving toward real-time monitoring of land cover changes in the future.
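The abstract describes training seven U-Net models for six-class land cover segmentation, each compiled with a different optimizer function, with Nadam and Adadelta performing best. The sketch below is a minimal illustration of such a setup in Keras, not the authors' implementation; the patch size, the number of spectral bands, and the network depth and filter widths are assumptions not stated in the abstract.

    # Minimal, illustrative U-Net-style model in Keras (TensorFlow).
    # Patch size (128x128), band count (5), depth, and filter widths are
    # assumptions; the abstract only specifies six output classes and the
    # Nadam/Adadelta optimizer comparison.
    from tensorflow.keras import layers, Model

    NUM_CLASSES = 6  # six land cover types reported in the abstract

    def conv_block(x, filters):
        # Two 3x3 convolutions with ReLU, the standard U-Net building block
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return x

    def build_unet(input_shape=(128, 128, 5)):
        inputs = layers.Input(shape=input_shape)

        # Encoder: convolution blocks followed by max pooling
        c1 = conv_block(inputs, 32)
        p1 = layers.MaxPooling2D()(c1)
        c2 = conv_block(p1, 64)
        p2 = layers.MaxPooling2D()(c2)

        # Bottleneck
        b = conv_block(p2, 128)

        # Decoder: transposed convolutions with skip connections from the encoder
        u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
        c3 = conv_block(layers.concatenate([u2, c2]), 64)
        u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
        c4 = conv_block(layers.concatenate([u1, c1]), 32)

        # Per-pixel softmax over the six land cover classes
        outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(c4)
        return Model(inputs, outputs)

    # The abstract reports that Nadam and Adadelta gave the best results; other
    # optimizers could be substituted to reproduce the seven-model comparison.
    for opt_name in ("nadam", "adadelta"):
        model = build_unet()
        model.compile(optimizer=opt_name,
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

The per-pixel softmax with sparse categorical cross-entropy is a common choice for multi-class semantic segmentation; the paper itself may use a different loss or label encoding.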
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3030112