Data mining technology information integration for maximum entropy classification models
Main authors: , , ,
Format: Conference proceedings
Language: English
Subjects:
Online access: Full text
Abstract: If information such as classification rules is dispersed across different sample data sets, these rules may need to be combined or fused. This is usually done either by combining the classification outputs, as in classifier ensembles, or by merging the collections of classification rules and weighting them individually. In this paper, we introduce a new way of fusing classifiers at the level of their parameters. The technique is based on generative probabilistic classifiers that use categorical distributions for nominal input dimensions and nonparametric density estimates for continuous ones. We additionally place distributions over the classifier parameters themselves, such as Dirichlet or Normal-Wishart distributions. These distributions are referred to as hyperdistributions or second-order distributions. We show that two or more classifiers can be fused by multiplying their hyperdistributions, and we derive simple formulas for this fusion. An example demonstrates how this new approach works. The main advantage of this fusion approach is that the hyperdistributions are preserved in the fusion step; for example, the fused classifiers can be used in subsequent training steps.
ISSN: 0094-243X; 1551-7616
DOI: 10.1063/5.0109764
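
As a rough illustration of the fusion step described in the abstract, the sketch below fuses two classifiers whose class-prior parameters carry Dirichlet hyperdistributions. It relies only on the standard fact that the product of two Dirichlet densities with concentration vectors a and b is proportional to a Dirichlet with concentration a + b - 1; the function name, the example numbers, and the restriction to the Dirichlet case (ignoring the Normal-Wishart part) are illustrative assumptions, not the paper's actual formulas or code.

```python
import numpy as np

def fuse_dirichlet(alpha_a: np.ndarray, alpha_b: np.ndarray) -> np.ndarray:
    """Fuse two Dirichlet hyperdistributions by multiplying their densities.

    Dir(theta | a) * Dir(theta | b)
        ∝ prod_k theta_k^(a_k - 1) * theta_k^(b_k - 1)
        = prod_k theta_k^((a_k + b_k - 1) - 1),
    i.e. again a Dirichlet, with concentration vector a + b - 1.
    """
    fused = alpha_a + alpha_b - 1.0
    if np.any(fused <= 0):
        raise ValueError("Fused concentration must stay positive.")
    return fused

# Hypothetical example: two classifiers trained on disjoint samples of the
# same 3-class problem, each summarised by a Dirichlet posterior over the
# class prior probabilities.
alpha_1 = np.array([5.0, 3.0, 2.0])   # hyperdistribution of classifier 1
alpha_2 = np.array([4.0, 6.0, 1.5])   # hyperdistribution of classifier 2
alpha_fused = fuse_dirichlet(alpha_1, alpha_2)

print("Fused concentration:", alpha_fused)
print("Fused mean class probabilities:", alpha_fused / alpha_fused.sum())
```

Because the result is again a Dirichlet, the fused model keeps a full hyperdistribution rather than a point estimate, which is what allows it to be updated in later training steps, as the abstract notes.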