Effects of time series imagery on automated classification of coastal wetland environments using projection pursuit methods
Main authors:
Format: Conference paper
Language: English
Subjects:
Online access: Order full text
Abstract: Using a time series of Radarsat SAR and Landsat TM imagery, we developed automated classification models for coastal land cover in Northampton County, VA. The database consisted of three pairs of images from each sensor, taken within one week of each other. We compared the results of models based on data from one image set with results based on data from all three sets, for Radarsat only, Landsat only, and both sensors combined. For Radarsat only, the overall classification score increased on the order of 23-30% for the three-date set compared to the one-date set, which is consistent with previous results based on different classification methods. We found a smaller increase, on the order of 1-5%, for Landsat only. For the combined-sensor cases, three-date inputs did not improve results compared with single-date inputs. For the feature extraction stage, we compared a projection pursuit method with principal component analysis. In the final classifier stage, we compared a vector quantization algorithm (LVQ) with a backpropagation-of-error model using a cross-entropy cost function. The differences given above for one-date versus three-date results hold for all the methods studied. The best model agreement with a National Wetlands Inventory (NWI) map was found for three-date, atmospherically corrected Landsat inputs, for which generalization performance was 80.5% of pixels correct, with 86.4% obtained on the cross-validation set used to find a stopping point for model optimization and 87.9% on the training set.
DOI: 10.1109/IGARSS.2001.977099
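The abstract describes a two-stage pipeline, feature extraction followed by a neural-network classifier trained with a cross-entropy cost, applied to single-date and multi-date band stacks. The sketch below is a minimal, hypothetical illustration of that one-date versus three-date comparison using off-the-shelf tools: PCA stands in for the feature extraction stage, and scikit-learn's MLPClassifier (which minimizes log-loss, i.e. cross-entropy) stands in for the backpropagation classifier. The band counts, array shapes, placeholder data, and the classify_stack function are assumptions for illustration only; the paper's projection pursuit and LVQ stages are not reproduced here.

```python
# Minimal, hypothetical sketch of the one-date vs. three-date comparison:
# PCA feature extraction followed by a small backpropagation network trained
# with a cross-entropy (log-loss) cost. Placeholder data only; the paper's
# projection pursuit and LVQ stages are not reproduced here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score


def classify_stack(pixels, labels, n_components=4):
    """Reduce a (n_pixels, n_bands) matrix with PCA, train an MLP classifier,
    and return the fraction of correctly classified held-out pixels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        pixels, labels, test_size=0.3, random_state=0, stratify=labels)
    pca = PCA(n_components=n_components).fit(X_tr)
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
    clf.fit(pca.transform(X_tr), y_tr)
    return accuracy_score(y_te, clf.predict(pca.transform(X_te)))


# Placeholder pixels and class labels standing in for the Landsat TM stacks
# and NWI-derived land-cover classes (the real data are not distributed here).
rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=2000)                # 5 illustrative classes
one_date = rng.normal(size=(2000, 6))                 # 6 bands, single date
three_date = np.hstack([one_date, rng.normal(size=(2000, 12))])  # 18 bands

print("one-date accuracy:  ", classify_stack(one_date, labels))
print("three-date accuracy:", classify_stack(three_date, labels))
```

With random placeholder data both scores will sit near chance; the point of the sketch is only the shape of the workflow, in which real single-date and three-date band stacks and reference labels would replace the synthetic arrays before any comparison like the one reported in the abstract could be made.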