kNN Classification of Malware Data Dependency Graph Features
Format: Article
Language: English
Abstract: Explainability in classification results depends on the features used
for classification. Data dependency graph features representing data movement
are directly correlated with operational semantics and are amenable to
fine-grained analysis. This study obtains accurate classification using
features tied to structure and semantics. By training an accurate model on
labeled data, this feature representation of semantics is shown to be
correlated with ground-truth labels. This was performed using non-parametric
learning with a novel feature representation on a large-scale dataset, the
Kaggle 2015 Malware dataset. The features used enable fine-grained analysis,
increased resolution, and explainable inferences. This allows the body of the
term frequency distribution to be analyzed further and provides an increase in
feature resolution over term frequency features. The method obtains high
accuracy from analysis of a single instruction, and the analysis can be
repeated for additional instructions to obtain further increases in accuracy.
This study evaluates the hypothesis that the semantic representation and the
analysis of structure can make accurate predictions and are correlated with
ground-truth labels. Additionally, similarity in the metric space can be
calculated directly without prior training. Our results provide evidence that
data dependency graphs accurately capture both semantic and structural
information, increasing explainability in classification results.
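The abstract notes that similarity in the metric space can be calculated directly without prior training. A minimal sketch of this non-parametric (kNN) idea is shown below, using hypothetical toy feature vectors and family labels for illustration — not the paper's actual data dependency graph features or dataset:

```python
from collections import Counter
import math

def euclidean(a, b):
    """Distance in the feature metric space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs. No model fitting occurs;
    # prediction is a direct distance computation against labeled examples.
    nearest = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical feature vectors standing in for two malware families.
train = [
    ([0.9, 0.1, 0.0], "FamilyA"),
    ([0.8, 0.2, 0.1], "FamilyA"),
    ([0.1, 0.7, 0.9], "FamilyB"),
    ([0.2, 0.8, 0.8], "FamilyB"),
]
print(knn_predict(train, [0.85, 0.15, 0.05], k=3))  # prints: FamilyA
```

Because the classifier is non-parametric, adding newly labeled samples requires no retraining — they are simply appended to the set of stored examples.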
DOI: 10.48550/arxiv.2406.02654