Is Tokenization Needed for Masked Particle Modelling?
Main authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | In this work, we significantly enhance masked particle modeling (MPM), a
self-supervised learning scheme for constructing highly expressive
representations of unordered sets relevant to developing foundation models for
high-energy physics. In MPM, a model is trained to recover the missing elements
of a set, a learning objective that requires no labels and can be applied
directly to experimental data. We achieve significant performance improvements
over previous work on MPM by addressing inefficiencies in the implementation
and incorporating a more powerful decoder. We compare several pre-training
tasks and introduce new reconstruction methods that utilize conditional
generative models without data tokenization or discretization. We show that
these new methods outperform the tokenized learning objective from the original
MPM on a new test bed for foundation models for jets, which includes a
wide variety of downstream tasks relevant to jet physics, such as
classification, secondary vertex finding, and track identification. |
---|---|
DOI: | 10.48550/arxiv.2409.12589 |
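
The summary above describes the learning objective only in words: hide some elements of an unordered particle set and train a model to recover them, with no labels required. The following is a minimal sketch of such a masked-set pre-training objective, written in PyTorch under illustrative assumptions (a small transformer encoder, a learned mask token, and plain MSE regression on the masked particles). It is not the paper's exact setup, which compares tokenized targets against conditional-generative reconstruction heads.

```python
# Minimal masked-set reconstruction sketch in the spirit of MPM.
# All sizes, the regression head, and the MSE target are illustrative assumptions.
import torch
import torch.nn as nn


class MaskedSetModel(nn.Module):
    """Encode an unordered set of particles and reconstruct the masked ones."""

    def __init__(self, n_features: int = 4, d_model: int = 64, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned placeholder
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_features)  # direct-regression decoder

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, n_features); mask: (batch, set_size) bool,
        # True where the particle is hidden from the encoder.
        h = self.embed(x)
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        return self.head(self.encoder(h))


def mpm_loss(model: MaskedSetModel, x: torch.Tensor, mask_frac: float = 0.3) -> torch.Tensor:
    """Self-supervised loss: reconstruct only the masked particles."""
    mask = torch.rand(x.shape[:2]) < mask_frac
    pred = model(x, mask)
    return nn.functional.mse_loss(pred[mask], x[mask])


if __name__ == "__main__":
    model = MaskedSetModel()
    jets = torch.randn(8, 30, 4)  # toy batch: 8 jets, 30 particles, 4 features each
    loss = mpm_loss(model, jets)
    loss.backward()
    print(float(loss))
```

Because the objective touches only the raw particle features, it can be applied directly to unlabeled experimental data; the paper's contribution is to replace the simple regression target shown here with stronger decoders, including conditional generative models that avoid tokenizing or discretizing the inputs.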