How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits
Published in: Psychophysiology, December 2019, Vol. 56 (12), p. e13451
Format: Article
Language: English
Online access: Full text
Abstract: Baseline correction plays an important role in past and current methodological debates in ERP research (e.g., the Tanner vs. Maess debate in the Journal of Neuroscience Methods), serving as a potential alternative to strong high‐pass filtering. However, the very assumptions that underlie traditional baseline correction also undermine it, implying a reduction in the signal‐to‐noise ratio. In other words, traditional baseline correction is statistically unnecessary and even undesirable. Including the baseline interval as a predictor in a GLM‐based statistical approach allows the data to determine how much baseline correction is needed, with both full traditional baseline correction and no baseline correction as special cases. This reduces the amount of variance in the residual error term and thus has the potential to increase statistical power.
We demonstrate how traditional baseline correction necessarily reduces statistical power and propose a new method for performing baseline correction that is less susceptible to topographical biases projected from the baseline window and avoids the additional computational burden inherent in other proposals. This provides a new way to address some of the tradeoffs inherent in the choice of high‐pass filter.
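The contrast described in the abstract can be illustrated with a minimal sketch (not the authors' code; all variable names and the simulated data below are illustrative assumptions): traditional baseline correction subtracts the baseline-window mean from the post-stimulus amplitude, which is equivalent to forcing a regression slope of 1 on the baseline term, whereas the GLM approach estimates that slope from the data, with slopes of 1 and 0 recovering full and no baseline correction as special cases.

```python
# Minimal sketch (assumption, not the paper's implementation): compare
# traditional baseline subtraction with a GLM that includes the
# baseline-interval mean as a predictor. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_trials = 200
condition = rng.integers(0, 2, n_trials)        # 0/1 experimental condition
baseline = rng.normal(0.0, 1.0, n_trials)       # mean amplitude in baseline window
# Post-stimulus mean amplitude: condition effect plus partial carry-over of baseline noise
post = 0.5 * condition + 0.6 * baseline + rng.normal(0.0, 1.0, n_trials)

df = pd.DataFrame({"post": post, "baseline": baseline, "condition": condition})

# Traditional baseline correction: subtract the baseline mean,
# i.e. fix the baseline "slope" at exactly 1 before modelling.
df["post_corrected"] = df["post"] - df["baseline"]
traditional = smf.ols("post_corrected ~ condition", data=df).fit()

# GLM alternative: let the data estimate the baseline slope.
# A slope near 1 reproduces traditional correction, a slope near 0
# reproduces no correction, and intermediate values fall in between.
glm = smf.ols("post ~ condition + baseline", data=df).fit()

print(traditional.params)
print(glm.params)
```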
ISSN: 0048-5772, 1469-8986, 1540-5958
DOI: 10.1111/psyp.13451