Why Overfitting Is Not (Usually) a Problem in Partial Correlation Networks
Published in: Psychological Methods, 2022-10, Vol. 27(5), pp. 822-840
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract:
Network psychometrics is undergoing a time of methodological reflection. In part, this was spurred by the revelation that ℓ1-regularization does not reduce spurious associations in partial correlation networks. In this work, we address another motivation for the widespread use of regularized estimation: the thought that it is needed to mitigate overfitting. We first clarify important aspects of overfitting and the bias-variance tradeoff that are especially relevant for the network literature, where the number of nodes or items in a psychometric scale is not large compared to the number of observations (i.e., a low p/n ratio). This revealed that bias and especially variance are most problematic in p/n ratios rarely encountered. We then introduce a nonregularized method, based on classical hypothesis testing, that fulfills two desiderata: (a) reducing or controlling the false positive rate and (b) quelling concerns of overfitting by providing accurate predictions. These were the primary motivations for initially adopting the graphical lasso (glasso). In several simulation studies, our nonregularized method provided more than competitive predictive performance and, in many cases, outperformed glasso. It appears to be nonregularized estimation, rather than regularized estimation, that best satisfies these desiderata. We then provide insights into using our methodology. Here we discuss the multiple comparisons problem in relation to prediction: stringent alpha levels, resulting in a sparse network, can deteriorate predictive accuracy. We end by emphasizing key advantages of our approach that make it ideal for both inference and prediction in network analysis.
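To make the hypothesis-testing idea concrete, below is a minimal Python sketch of a nonregularized partial correlation network: partial correlations are computed from the inverse of the sample correlation matrix, each is tested against zero with a Fisher z-test at level alpha, and nonsignificant edges are set to zero. The function name `pcor_network` and the specific test are illustrative assumptions for this record, not necessarily the exact procedure used in the article.

```python
import numpy as np
from scipy import stats

def pcor_network(X, alpha=0.05):
    """Nonregularized partial correlation network via classical hypothesis testing.

    Each partial correlation is tested against zero with a Fisher z-test;
    edges that are not significant at `alpha` are set to zero.
    """
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)           # sample correlation matrix (p x p)
    theta = np.linalg.inv(R)                   # precision (inverse correlation) matrix
    d = np.sqrt(np.outer(np.diag(theta), np.diag(theta)))
    pcor = -theta / d                          # partial correlations
    np.fill_diagonal(pcor, 0.0)

    # Fisher z-test: SE of atanh(pcor) is 1 / sqrt(n - (p - 2) - 3)
    se = 1.0 / np.sqrt(n - p - 1)
    z = np.arctanh(pcor) / se
    pvals = 2 * stats.norm.sf(np.abs(z))

    keep = (pvals < alpha) & ~np.eye(p, dtype=bool)
    return np.where(keep, pcor, 0.0), pvals

# Illustrative usage with simulated data (n = 500 observations, p = 10 items)
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
network, pvals = pcor_network(X, alpha=0.05)
```

Note that alpha directly controls sparsity here; the article's discussion of the multiple comparisons problem in relation to prediction concerns exactly this choice.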
Translational Abstract
It is vital to clearly understand the benefits and limitations of regularized networks, as inferences drawn from them may hold methodological and clinical implications. This article addresses a core rationale for the increasing adoption of regularized estimation, namely that it reduces overfitting. Accordingly, we elucidate important aspects of overfitting and the bias-variance tradeoff that are especially relevant for network research, where the number of variables is small relative to the number of observations (i.e., a low p/n ratio). We find that bias, and especially variance, are the most problematic aspects for inference in p/n ratios rarely encountered in psychometric settings. We then introduce a nonregularized method based on classical techniques that fulfills two desiderata: (1) reducing …
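The abstract's point that stringent alpha levels can deteriorate predictive accuracy can be illustrated under the same assumptions as the sketch above: select each node's neighbors at several alpha levels and compare out-of-sample R², here using node-wise ordinary least squares on the selected neighbors as an assumed stand-in for the article's prediction procedure (not its exact method). This reuses `pcor_network` from the earlier sketch.

```python
import numpy as np

def node_r2(X_train, X_test, alpha):
    """Out-of-sample R^2 per node, predicting each node from the
    neighbors selected at significance level `alpha`."""
    n, p = X_train.shape
    net, _ = pcor_network(X_train, alpha=alpha)   # from the sketch above
    r2 = np.zeros(p)
    for j in range(p):
        nbrs = np.flatnonzero(net[j] != 0)        # selected neighbors of node j
        if nbrs.size == 0:
            # no edges retained: fall back to predicting the training mean
            pred = np.full(X_test.shape[0], X_train[:, j].mean())
        else:
            A = np.hstack([np.ones((n, 1)), X_train[:, nbrs]])
            beta, *_ = np.linalg.lstsq(A, X_train[:, j], rcond=None)
            pred = np.hstack([np.ones((X_test.shape[0], 1)), X_test[:, nbrs]]) @ beta
        tss = np.sum((X_test[:, j] - X_test[:, j].mean()) ** 2)
        r2[j] = 1.0 - np.sum((X_test[:, j] - pred) ** 2) / tss
    return r2

# Compare average predictive accuracy across alpha levels (illustrative only):
# for a in (0.001, 0.01, 0.05, 0.10):
#     print(a, node_r2(X_train, X_test, alpha=a).mean())
```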
ISSN: 1082-989X, 1939-1463
DOI: 10.1037/met0000437