DPA-2: a large atomic model as a multi-task learner
Saved in:

Main Author(s): | , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Online Access: | Order full text |
Summary: | The rapid advancements in artificial intelligence (AI) are catalyzing
transformative changes in atomic modeling, simulation, and design. AI-driven
potential energy models have demonstrated the capability to conduct
large-scale, long-duration simulations with the accuracy of ab initio
electronic structure methods. However, the model generation process remains a
bottleneck for large-scale applications. We propose a shift towards a
model-centric ecosystem, wherein a large atomic model (LAM), pre-trained across
multiple disciplines, can be efficiently fine-tuned and distilled for various
downstream tasks, thereby establishing a new framework for molecular modeling.
In this study, we introduce the DPA-2 architecture as a prototype for LAMs.
Pre-trained on a diverse array of chemical and materials systems using a
multi-task approach, DPA-2 demonstrates superior generalization capabilities
across multiple downstream tasks compared to the traditional single-task
pre-training and fine-tuning methodologies. Our approach sets the stage for the
development and broad application of LAMs in molecular and materials simulation
research. |
DOI: | 10.48550/arxiv.2312.15492 |