Enhanced astronomical source classification with integration of attention mechanisms and vision transformers

Bibliographic Details
Published in: Astrophysics and Space Science, 2024-08, Vol. 369 (8), p. 92, Article 92
Authors: Bhavanam, Srinadh Reddy; Channappayya, Sumohana S.; P. K, Srijith; Desai, Shantanu
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Accurate classification of celestial objects is essential for advancing our understanding of the universe. MargNet is a recently developed deep learning-based classifier applied to the Sloan Digital Sky Survey (SDSS) Data Release 16 (DR16) dataset to separate stars, quasars, and compact galaxies using photometric data. MargNet utilizes a stacked architecture, combining a Convolutional Neural Network (CNN) for modelling images with an Artificial Neural Network (ANN) for modelling photometric parameters. Notably, MargNet focuses exclusively on compact galaxies and outperforms other methods in separating compact galaxies from stars and quasars, even at fainter magnitudes. In this study, we propose enhancing MargNet's performance by incorporating attention mechanisms and Vision Transformer (ViT)-based models for processing image data. The attention mechanism allows the model to focus on relevant features and capture intricate patterns within images, effectively distinguishing between different classes of celestial objects. Additionally, we leverage ViTs, a transformer-based deep learning architecture renowned for its exceptional performance in image classification tasks. We enhance the model's understanding of complex astronomical images by utilizing ViT's ability to capture global dependencies and contextual information. Our approach uses a curated dataset comprising 240,000 compact and 150,000 faint objects. The models learn classification directly from the data, minimizing human intervention. Furthermore, we explore a ViT-based hybrid architecture that takes photometric features and images together as input to classify astronomical objects. Our results demonstrate that the proposed attention-augmented CNN in MargNet marginally outperforms both the original MargNet and the proposed ViT-based MargNet models. Additionally, the ViT-based hybrid model emerges as the most lightweight and easiest-to-train model, with classification accuracy comparable to that of the best-performing attention-enhanced MargNet. This advancement in deep learning will contribute to greater success in identifying objects in upcoming surveys such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST).
ISSN: 0004-640X (print); 1572-946X (electronic)
DOI: 10.1007/s10509-024-04357-9