Observer Effects and Avian-Call-Count Survey Quality: Rare-Species Biases and Overconfidence

Bibliographic Details
Published in: The Auk, Vol. 129(1), pp. 76-86, January 2012
Authors: Farmer, Robert G.; Leonard, Marty L.; Horn, Andrew G.
Format: Article
Language: English
Online access: Full text
Abstract

Wildlife monitoring surveys are prone to nondetection errors and false positives. To determine the factors that affect the incidence of these errors, we built an Internet-based survey that simulated avian point counts and measured error rates among volunteer observers. Using similar-sounding vocalizations from paired rare and common bird species, we measured the effects of species rarity and observer skill, and the influence of a reward system that explicitly encouraged the detection of rare species. Higher self-reported skill levels and common species independently predicted fewer nondetections (probability range: 0.11 [experts, common species] to 0.54 [moderates, rare species]). The overall proportion of detections that were false positives increased significantly as skill level declined (range: 0.06 [experts, common species] to 0.22 [moderates, rare species]). Moderately skilled observers were significantly more likely to report false-positive records of common species than of rare species, whereas experts were significantly more likely to report false positives of rare species than of common species. The reward for correctly detecting rare species did not significantly affect these patterns. Lastly, because false positives can also result from observers overestimating their own abilities (“overconfidence”), we tested whether observers' beliefs that they had recorded error-free data (“confidence”) tended to be incorrect (“overconfident”), and whether this pattern varied with skill. Observer confidence increased significantly with observer skill, whereas overconfidence was uniformly high (overall mean proportion = 0.73). Our results emphasize the value of controlling for observer skill during data collection and modeling, and they do not support the use of opinion-based (i.e., subjective) indications of observer confidence.
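The abstract reports three error measures: nondetection probability, the proportion of detections that were false positives, and overconfidence. As a minimal sketch of how such measures could be tallied from per-record detection data, assuming the standard definitions implied above (the paper's exact formulas are not reproduced here, and overconfidence may well be scored per observer rather than per record as in this toy):

    # Hypothetical sketch, not the authors' code: tallying the three error
    # metrics described in the abstract from invented point-count records.
    from dataclasses import dataclass

    @dataclass
    class Record:
        present: bool    # the species' call actually occurred in the count
        reported: bool   # the observer reported the species
        confident: bool  # the observer believed their data were error-free

    # Toy data, invented purely for illustration.
    records = [
        Record(True,  True,  True),   # correct detection
        Record(True,  False, True),   # nondetection, yet observer confident
        Record(False, True,  True),   # false positive, observer confident
        Record(True,  True,  False),
        Record(False, False, False),
    ]

    # Nondetection probability: misses over true presences.
    presences = [r for r in records if r.present]
    p_nondetect = sum(not r.reported for r in presences) / len(presences)

    # False-positive proportion: share of all detections that were wrong.
    detections = [r for r in records if r.reported]
    p_false_pos = sum(not r.present for r in detections) / len(detections)

    # Overconfidence: share of "confident" records that contain an error.
    confident = [r for r in records if r.confident]
    p_overconf = sum(r.present != r.reported for r in confident) / len(confident)

    print(f"nondetection={p_nondetect:.2f}, false-positive={p_false_pos:.2f}, "
          f"overconfidence={p_overconf:.2f}")

On the toy data this prints nondetection=0.33, false-positive=0.33, overconfidence=0.67; the study's reported values (e.g., mean overconfidence of 0.73) come from its own observer data, not from any such toy.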
ISSN: 0004-8038, 1938-4254, 2732-4613
DOI: 10.1525/auk.2012.11129