Comparison of the Chapman–Robson and regression estimators of Z from catch-curve data when non-sampling stochastic error is present

Bibliographic details
Published in: Fisheries research 2002-12, Vol. 59 (1), p. 149-159
Main authors: Dunn, A.; Francis, R.I.C.C.; Doonan, I.J.
Format: Article
Language: English
Online access: Full text
Description

Abstract: Catch-curve analysis is a common method for estimating the total mortality rate (Z) from age-frequency data in fisheries research. Methods for determining Z include the Chapman–Robson estimator (CR) and regression-based methods, here called RG (simple regression), R1 (truncate-at-one regression) and R5 (truncate-at-five regression). This paper investigates the sensitivity of CR and the regression estimators of Z when stochastic error is present in the true mortality rate, in recruitment and in ageing. We consider the accuracy and precision of each of these estimators for errors introduced both collectively and individually. CR was more accurate in most of the simulation scenarios, i.e., had a lower root mean squared error (RMSE), and also tended to have lower bias than the regression estimators. While a regression estimator (usually either R1 or RG) occasionally performed better than CR, the improvement was slight. RG, the simplest implementation of the regression estimator, was often the most strongly negatively biased. The performance of all estimators degraded with increasing Z and with increased levels of introduced error. With sample sizes defined by a coefficient of variation for sampling error of 20%, variation in mortality defined by an annual coefficient of variation of 20%, and recruitment variability with a standard deviation of 0.7 (log scale) and autocorrelated errors, CR had a percent root mean squared error (of the true mortality rate) of 22% when Z=0.2 yr^-1 and 28% at Z=0.8 yr^-1. Corresponding values were 24 and 37% for R1; 31 and 38% for RG; and 41 and 44% for R5.
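The two families of estimators compared in the abstract can be sketched as follows. This is a minimal illustration using the standard forms of the Chapman–Robson and simple log-linear regression (RG) estimators, not the authors' simulation code; the function names, the noise model, and the simulated catch curve are my own assumptions.

```python
# Hypothetical sketch comparing the Chapman-Robson (CR) and simple
# regression (RG) estimators of total mortality Z on a simulated catch curve.
import math
import random

def chapman_robson(counts):
    """CR estimator of Z from catch counts indexed by recoded age 0, 1, 2, ...
    (ages recoded so the first fully recruited age is 0)."""
    n = sum(counts)
    T = sum(age * c for age, c in enumerate(counts))  # sum of recoded ages
    s_hat = T / (n + T - 1)   # estimated annual survival S
    return -math.log(s_hat)   # Z = -ln(S)

def regression_z(counts):
    """RG estimator: Z is minus the least-squares slope of ln(count) on age.
    Zero counts are skipped because their logarithm is undefined."""
    pts = [(a, math.log(c)) for a, c in enumerate(counts) if c > 0]
    n = len(pts)
    mx = sum(a for a, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((a - mx) * (y - my) for a, y in pts)
             / sum((a - mx) ** 2 for a, _ in pts))
    return -slope

if __name__ == "__main__":
    # Simulate a catch curve with true Z = 0.5 and multiplicative
    # lognormal noise (a crude stand-in for the stochastic errors studied).
    random.seed(1)
    true_z = 0.5
    counts = [max(1, round(1000 * math.exp(-true_z * a)
                           * random.lognormvariate(0, 0.2)))
              for a in range(10)]
    print("CR:", chapman_robson(counts), " RG:", regression_z(counts))
```

On noise-free geometric data both functions recover Z closely; adding variability to mortality, recruitment, or ageing, as the paper does, is what separates their bias and RMSE.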
ISSN: 0165-7836; 1872-6763
DOI: 10.1016/S0165-7836(01)00407-6