OPT: Open Pre-trained Transformer Language Models
Format: Article
Language: English
Abstract: Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
DOI: 10.48550/arxiv.2205.01068
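
The abstract notes that code for experimenting with all of the released models accompanies the release. As a minimal sketch, assuming the checkpoints are obtained from the Hugging Face Hub under identifiers such as facebook/opt-125m (an assumption, not stated in this record), the smallest model in the 125M-175B suite could be loaded and prompted like this:

```python
# Hedged sketch: loading an OPT checkpoint via the Hugging Face transformers library.
# The "facebook/opt-125m" identifier and the use of transformers are assumptions
# beyond what this record states.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # smallest model in the 125M-175B suite
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate a short continuation with the decoder-only model.
prompt = "Large language models have shown remarkable"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Larger checkpoints in the suite would follow the same pattern with a different model identifier, at correspondingly higher memory cost.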