Is your ad hoc model selection strategy affecting your multimodel inference?


Detailed Description

Bibliographic Details
Published in: Ecosphere (Washington, D.C.), 2020-01, Vol. 11 (1), p. n/a
Main Authors: Morin, Dana J., Yackulic, Charles B., Diffendorfer, Jay E., Lesmeister, Damon B., Nielsen, Clayton K., Reid, Janice, Schauber, Eric M.
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Ecologists routinely fit complex models with multiple parameters of interest, where hundreds or more competing models are plausible. To limit the number of fitted models, ecologists often define a model selection strategy composed of a series of stages in which certain features of a model are compared while other features are held constant. Defining these multi‐stage strategies requires making a series of decisions, which may potentially impact inferences but have not been critically evaluated. We begin by identifying key features of strategies, introducing descriptive terms where they did not already exist in the literature. Strategies differ in how they define and order model building stages. Sequential‐by‐sub‐model strategies focus on one sub‐model (parameter) at a time, with modeling of subsequent sub‐models dependent on the sub‐model structures selected in previous stages. Secondary candidate set strategies model sub‐models independently and combine the top set of models from each sub‐model for selection in a final stage. Build‐up approaches define stages across sub‐models and increase in complexity at each stage. Strategies also differ in how the top set of models is selected in each stage and whether they use null or more complex sub‐model structures for non‐target sub‐models. We tested the performance of different model selection strategies using four data sets and three model types. For each data set, we determined the "true" distribution of AIC weights by fitting all plausible models. Then, we calculated the number of models that would have been fitted and the portion of "true" AIC weight we recovered under different model selection strategies. Sequential‐by‐sub‐model strategies often performed poorly. Based on our results, we recommend using a build‐up or secondary candidate set strategy, which were more reliable, and carrying all models within 5–10 AIC of the top model forward to subsequent stages.
The structure of non‐target sub‐models was less important. Multi‐stage approaches cannot compensate for a lack of critical thought in selecting covariates and building models to represent competing a priori hypotheses. However, even when competing hypotheses for different sub‐models are limited, thousands or more models may be possible, so strategies to explore candidate model space reliably and efficiently will be necessary.
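The abstract's core quantities can be sketched in a few lines: Akaike weights are w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), where Δ_i = AIC_i − min(AIC), and the recommended rule carries forward every model within 5–10 AIC of the stage's top model. The following is a minimal illustration, not the authors' code; the model names and AIC values are hypothetical.

```python
import math

def aic_weights(aics):
    """Akaike weights: w_i = exp(-d_i / 2) / sum_j exp(-d_j / 2),
    where d_i = AIC_i - min(AIC)."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def carry_forward(models, delta=6.0):
    """Keep every (name, AIC) pair within `delta` AIC units of the top
    model, in the spirit of the 5-10 AIC rule recommended above."""
    best = min(aic for _, aic in models)
    return [(name, aic) for name, aic in models if aic - best <= delta]

# Hypothetical candidate set for one sub-model stage
stage = [("psi(null)", 210.4), ("psi(forest)", 204.1),
         ("psi(forest+edge)", 205.0), ("psi(edge)", 214.9)]
kept = carry_forward(stage, delta=6.0)           # two models survive
weights = aic_weights([a for _, a in stage])     # sum to 1 by construction
```

Here `carry_forward` would be applied at each stage of a build-up or secondary-candidate-set strategy, so that model uncertainty within a stage is not discarded prematurely.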
ISSN: 2150-8925
DOI: 10.1002/ecs2.2997