On the dissection of degenerate cosmologies with machine learning

Bibliographic Details
Published in: Monthly Notices of the Royal Astronomical Society, 2019-07, Vol. 487 (1), p. 104-122
Authors: Merten, Julian; Giocoli, Carlo; Baldi, Marco; Meneghetti, Massimo; Peel, Austin; Lalande, Florian; Starck, Jean-Luc; Pettorino, Valeria
Format: Article
Language: English
Online access: Full text
Description
Abstract: Based on the dustgrain-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionally reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts, and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data, we use a convolutional neural network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers: one based on a nearest-neighbour search and one based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59 per cent for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76 per cent with no observational degeneracies remaining. Visualizing the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to better understand why they perform so well.
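To illustrate the kind of classifier described in the abstract, the following is a minimal sketch (not the authors' actual architecture) of a convolutional network that maps a single lensing convergence map to a predicted model class. The number of classes, the map size, the layer widths, and the use of tf.keras are all assumptions made purely for illustration.

    # Minimal sketch of a CNN classifier for convergence maps (illustrative only).
    # Assumptions: 9 model classes, 256x256-pixel single-channel maps, tf.keras API.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 9   # assumption: one class per simulated model
    MAP_SIZE = 256    # assumption: convergence-map side length in pixels

    def build_convergence_cnn():
        """Return a compiled CNN that maps a convergence map to model-class probabilities."""
        model = models.Sequential([
            layers.Input(shape=(MAP_SIZE, MAP_SIZE, 1)),
            layers.Conv2D(16, 5, activation="relu", padding="same"),
            layers.MaxPooling2D(4),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(4),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.GlobalAveragePooling2D(),       # dimensionally reduced feature vector
            layers.Dense(128, activation="relu"),  # fully connected classifier head
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    if __name__ == "__main__":
        build_convergence_cnn().summary()

A tomographic variant, as mentioned in the abstract, would stack maps from several source redshifts along the channel axis instead of using a single-channel input.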
ISSN: 0035-8711 (print), 1365-2966 (electronic)
DOI: 10.1093/mnras/stz972