JARVIS-Leaderboard: A Large Scale Benchmark of Materials Design Methods



Bibliographic Details
Authors: Choudhary, Kamal, Wines, Daniel, Li, Kangming, Garrity, Kevin F, Gupta, Vishu, Romero, Aldo H, Krogel, Jaron T, Saritas, Kayahan, Fuhr, Addis, Ganesh, Panchapakesan, Kent, Paul R. C, Yan, Keqiang, Lin, Yuchao, Ji, Shuiwang, Blaiszik, Ben, Reiser, Patrick, Friederich, Pascal, Agrawal, Ankit, Tiwary, Pratyush, Beyerle, Eric, Minch, Peter, Rhone, Trevor David, Takeuchi, Ichiro, Wexler, Robert B, Mannodi-Kanakkithodi, Arun, Ertekin, Elif, Mishra, Avanish, Mathew, Nithin, Baird, Sterling G, Wood, Mitchell, Rohskopf, Andrew Dale, Hattrick-Simpers, Jason, Wang, Shih-Han, Achenie, Luke E. K, Xin, Hongliang, Williams, Maureen, Biacchi, Adam J, Tavazza, Francesca
Format: Article
Language: English
Description
Abstract: Lack of rigorous reproducibility and validation is a major hurdle for scientific development across many fields. Materials science in particular encompasses a variety of experimental and theoretical approaches that require careful benchmarking. Leaderboard efforts have been developed previously to mitigate these issues. However, a comprehensive comparison and benchmarking on an integrated platform with multiple data modalities, covering both perfect and defect materials data, is still lacking. This work introduces JARVIS-Leaderboard, an open-source and community-driven platform that facilitates benchmarking and enhances reproducibility. The platform allows users to set up benchmarks with custom tasks and enables contributions in the form of dataset, code, and metadata submissions. We cover the following materials design categories: Artificial Intelligence (AI), Electronic Structure (ES), Force-fields (FF), Quantum Computation (QC) and Experiments (EXP). For AI, we cover several types of input data, including atomic structures, atomistic images, spectra, and text. For ES, we consider multiple ES approaches, software packages, pseudopotentials, materials, and properties, comparing results to experiment. For FF, we compare multiple approaches for material property predictions. For QC, we benchmark Hamiltonian simulations using various quantum algorithms and circuits. Finally, for experiments, we use the inter-laboratory approach to establish benchmarks. There are 1281 contributions to 274 benchmarks using 152 methods with more than 8 million data points, and the leaderboard is continuously expanding. The JARVIS-Leaderboard is available at https://pages.nist.gov/jarvis_leaderboard
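The abstract describes scoring user contributions against reference benchmark data. As a minimal, hypothetical sketch of what such a comparison involves (the actual JARVIS-Leaderboard submission format and scoring code are not shown here; the property values below are illustrative, not real benchmark data), a contribution's predictions can be compared to reference values with a metric such as mean absolute error:

```python
# Hypothetical sketch of a leaderboard-style benchmark comparison.
# The function name and the example values are illustrative assumptions,
# not taken from the actual JARVIS-Leaderboard codebase.

def mean_absolute_error(predictions, references):
    """Mean absolute error between predicted and reference property values."""
    if len(predictions) != len(references):
        raise ValueError("prediction/reference length mismatch")
    return sum(abs(p - r) for p, r in zip(predictions, references)) / len(predictions)

# Illustrative data: predicted vs. reference formation energies (eV/atom)
predicted = [-1.20, -0.85, 0.05, -2.10]
reference = [-1.15, -0.90, 0.00, -2.00]

print(f"MAE: {mean_absolute_error(predicted, reference):.3f} eV/atom")
```

A lower MAE ranks a contribution higher on a benchmark of this kind; other tasks (e.g., classification or spectra matching) would use different metrics.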
DOI:10.48550/arxiv.2306.11688