Combination of GNN and MNL: a new model for dealing with multi-classification tasks
Published in: Applied Mathematics and Nonlinear Sciences, 2024-01, Vol. 9 (1)
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: In daily life, tasks such as choosing a mode of transportation in traffic, diagnosing diseases and selecting medications in healthcare, and recommending products in e-commerce can all be framed as multi-classification tasks. Currently, effective approaches to such tasks include the behavior-modeling-based Integrated Choice and Latent Variable (ICLV) model and the machine-learning-based Multinomial Logit (MNL) model. The former slightly outperforms the latter in multi-classification tasks because it can identify latent variables and integrate the selection process. However, if certain shortcomings of the MNL model can be addressed, such as the assumption of independence from irrelevant alternatives, the linearity assumption, and the lack of a hierarchical structure, MNL could outperform the ICLV model on some datasets. Graph Neural Networks (GNNs), which treat the entire feature set as a graph and model the relationships between features, break the linearity assumption and offer a more flexible, hierarchical structure. This indicates that GNNs can effectively alleviate the limitations of MNL. We therefore propose an innovative GNN-MNL composite model: first, a GNN is employed to extract features from the dataset; the extracted features are then used as input to train the MNL model; finally, the trained MNL classifies new samples. The model's accuracy was further enhanced by incorporating Generative Adversarial Networks (GANs) for data augmentation during training. Through validation on three datasets, we demonstrated that the GNN-MNL composite model indeed achieves higher accuracy, confirming its feasibility. Future research could explore the generalizability of the GNN-MNL model in other classification domains.
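The two-stage pipeline the abstract describes (graph-based feature extraction, then multinomial logit classification) can be sketched in a minimal NumPy toy. This is illustrative only, not the authors' implementation: the toy graph, the single GCN-style propagation layer, the random projection `W`, and the helper names `gcn_features` and `train_mnl` are all assumptions, and the GAN augmentation step is omitted.

```python
import numpy as np

def gcn_features(X, A, W):
    """One GCN-style layer: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)    # graph-smoothed, nonlinear features

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

def train_mnl(H, y, n_classes, lr=0.5, steps=200):
    """Fit a multinomial logit by gradient descent on cross-entropy."""
    n, d = H.shape
    B = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot labels
    for _ in range(steps):
        P = softmax(H @ B)
        B -= lr * H.T @ (P - Y) / n           # softmax cross-entropy gradient
    return B

# Toy data: two well-separated feature clusters; edges connect same-class nodes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (10, 3)), rng.normal(2, 0.5, (10, 3))])
y = np.array([0] * 10 + [1] * 10)
A = np.zeros((20, 20))
A[:10, :10] = 1.0
A[10:, 10:] = 1.0
np.fill_diagonal(A, 0.0)

W = rng.normal(size=(3, 4))                   # random projection in the GNN layer
H = gcn_features(X, A, W)                     # stage 1: GNN feature extraction
B = train_mnl(H, y, n_classes=2)              # stage 2: MNL on extracted features
pred = softmax(H @ B).argmax(axis=1)          # classify (here: the training nodes)
print("train accuracy:", (pred == y).mean())
```

On this separable toy graph the composite pipeline recovers the labels almost perfectly; in the paper's setting the fixed `W` would instead be learned and the MNL would be applied to held-out samples.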
ISSN: 2444-8656
DOI: 10.2478/amns-2024-3548