LogFormer: A Pre-train and Tuning Pipeline for Log Anomaly Detection
Main authors: , , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Log anomaly detection is a key component in the field of artificial intelligence for IT operations (AIOps). Given log data from varied domains, retraining the whole network for each unknown domain is inefficient in real industrial scenarios. Moreover, previous deep models merely focused on extracting the semantics of log sequences within a single domain, leading to poor generalization on multi-domain logs. To alleviate this issue, we propose a unified Transformer-based framework for log anomaly detection (LogFormer) to improve generalization across different domains, built as a two-stage process consisting of a pre-training stage and an adapter-based tuning stage. Specifically, the model is first pre-trained on the source domain to obtain shared semantic knowledge of log data. This knowledge is then transferred to the target domain via shared parameters. In addition, a Log-Attention module is proposed to supplement the information discarded by log parsing. The proposed method is evaluated on three public datasets and one real-world dataset. Experimental results on multiple benchmarks demonstrate the effectiveness of LogFormer with fewer trainable parameters and lower training costs.
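
The abstract describes adapter-based tuning: a Transformer pre-trained on the source domain is adapted to a target log domain by training only small inserted modules while the shared backbone stays frozen, which is what keeps the trainable-parameter count and training cost low. Below is a minimal sketch of that general idea using a standard bottleneck adapter; it is not LogFormer's exact adapter or Log-Attention design, and all class and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    # Bottleneck adapter: down-project, nonlinearity, up-project, residual add.
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdaptedEncoderLayer(nn.Module):
    # Wraps a (pre-trained, to-be-frozen) Transformer layer with a trainable adapter.
    def __init__(self, layer: nn.TransformerEncoderLayer, d_model: int):
        super().__init__()
        self.layer = layer
        self.adapter = Adapter(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.layer(x))

d_model = 256
base = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
model = AdaptedEncoderLayer(base, d_model)

# Freeze the shared pre-trained weights; only adapter parameters stay trainable.
for p in model.layer.parameters():
    p.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable} / {total}")

x = torch.randn(4, 32, d_model)  # (batch, log-sequence length, hidden size)
print(model(x).shape)
```

Printing the parameter counts makes the efficiency argument concrete: only the small adapter is updated per target domain, while the frozen backbone carries the shared semantic knowledge learned during pre-training.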
DOI: 10.48550/arxiv.2401.04749