Ground-Roll Attenuation Using a Dual-Filter-Bank Convolutional Neural Network

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, pp. 1-11
Main Authors: Zhang, Chao; van der Baan, Mirko
Format: Article
Language: English
Description
Abstract: Ground-roll attenuation is very challenging because of its high amplitudes and its frequency content, which overlaps with that of the desired signals. A particular challenge is to recover weak reflections underneath strong masking ground-roll. We propose a dual-filter-bank setup combined with two convolutional neural networks (CNNs) to realize ground-roll attenuation. The rationale for using a dual-filter-bank strategy is that it permits using two CNNs with different input kernel sizes and different complexities to recognize and extract broad-scale (long-wavelength) and narrow-scale (short-wavelength) features separately. We also apply a frequency filter to create a preliminary separation between the signal and the noise. In addition, we use a radial trace transform that focuses the desired signal into a smaller area, facilitating separation of the reflections and ground-roll and accelerating training. The network training strategy combines synthetic and field data examples, in addition to noise injection, to augment the number of available training samples. Tests on synthetic and field datasets show that the proposed strategy achieves superior ground-roll attenuation compared with standard methods, even for data with irregular spatial spacing or ground-roll characteristics not contained in the training data.
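
The core idea of the abstract, two CNNs with different input kernel sizes fed by a preliminary low-/high-frequency band split, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' published code: the 15 Hz cutoff, kernel sizes 9 and 3, channel count, and network depth are all hypothetical choices, and the networks would still need to be trained on synthetic and field gathers as the abstract describes.

```python
# Hedged sketch of a dual-filter-bank, dual-CNN setup. All sizes and the
# cutoff frequency are illustrative assumptions, not values from the paper.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, sosfiltfilt

def split_bands(gather, fs, cutoff_hz=15.0):
    """Preliminary frequency split of a shot gather (traces x samples).
    The low band carries most ground-roll energy, the high band most
    reflections. The 15 Hz cutoff is a hypothetical choice."""
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    low = sosfiltfilt(sos, gather, axis=-1)
    return low, gather - low

class BandCNN(nn.Module):
    """Simple denoising CNN; the kernel size is chosen per band so the
    two networks see broad-scale vs. narrow-scale features."""
    def __init__(self, kernel_size, channels=32, depth=5):
        super().__init__()
        pad = kernel_size // 2
        layers = [nn.Conv2d(1, channels, kernel_size, padding=pad), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad),
                       nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, kernel_size, padding=pad)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Two CNNs with different input kernel sizes, per the dual-filter-bank idea:
low_net = BandCNN(kernel_size=9)   # broad-scale (long-wavelength) band
high_net = BandCNN(kernel_size=3)  # narrow-scale (short-wavelength) band

def denoise(gather, fs):
    """Band-split, denoise each band with its own CNN, and recombine."""
    low, high = split_bands(gather, fs)
    to_t = lambda a: torch.from_numpy(
        np.ascontiguousarray(a, dtype=np.float32))[None, None]
    with torch.no_grad():
        out = low_net(to_t(low)) + high_net(to_t(high))
    return out[0, 0].numpy()
```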
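
The radial trace transform mentioned in the abstract resamples a shot gather from (offset, time) to (apparent velocity, time) along radial lines x = v * t through the origin, so that hyperbolic reflections and linear ground-roll occupy more distinct regions. Below is a minimal nearest-trace sketch; the velocity fan, the geometry in the usage example, and the lack of inter-trace interpolation are all simplifying assumptions.

```python
import numpy as np

def radial_trace_transform(gather, offsets, dt, velocities):
    """Map a gather (n_traces x n_samples) from (offset, time) to
    (apparent velocity, time) by sampling along radial lines x = v * t.
    Nearest-trace sampling is an illustrative simplification; practical
    implementations interpolate between neighboring traces."""
    n_traces, n_samples = gather.shape
    t = np.arange(n_samples) * dt
    out = np.zeros((len(velocities), n_samples))
    for i, v in enumerate(velocities):
        x = v * t                                   # radial line x = v * t
        idx = np.argmin(np.abs(offsets[:, None] - x[None, :]), axis=0)
        out[i] = gather[idx, np.arange(n_samples)]  # nearest trace per sample
    return out

# Hypothetical usage: 48 traces at 25 m spacing, 2 ms sampling,
# and a fan of apparent velocities from 100 to 3000 m/s.
gather = np.random.randn(48, 1000)
offsets = np.arange(48) * 25.0
rt = radial_trace_transform(gather, offsets, dt=0.002,
                            velocities=np.linspace(100.0, 3000.0, 64))
```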
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2021.3110303