Eight Things to Know about Large Language Models
Author:
Format: Article
Language: English
Keywords:
Online Access: Order full text
Summary: The widespread public deployment of large language models (LLMs) in recent months has prompted a wave of new attention and engagement from advocates, policymakers, and scholars from many fields. This attention is a timely response to the many urgent questions that this technology raises, but it can sometimes miss important considerations. This paper surveys the evidence for eight potentially surprising such points:

1. LLMs predictably get more capable with increasing investment, even without targeted innovation.
2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment.
3. LLMs often appear to learn and use representations of the outside world.
4. There are no reliable techniques for steering the behavior of LLMs.
5. Experts are not yet able to interpret the inner workings of LLMs.
6. Human performance on a task isn't an upper bound on LLM performance.
7. LLMs need not express the values of their creators nor the values encoded in web text.
8. Brief interactions with LLMs are often misleading.
DOI: 10.48550/arxiv.2304.00612
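
As a supplement to point 1 of the summary: the predictable capability gains the paper describes are usually expressed as empirical scaling laws. A minimal sketch of the commonly reported functional form is given below; the symbols $a$, $\alpha$, and $L_\infty$ are illustrative assumptions fit empirically per model family, not values taken from this paper or this record.

```latex
% Illustrative only: a generic compute scaling law of the form reported
% in the scaling-laws literature, not a result stated in this paper.
%   L(C)      : test loss as a function of training compute C
%   L_\infty  : assumed irreducible loss floor
%   a, \alpha : assumed positive constants, fit per model family
L(C) \;\approx\; L_\infty + a\,C^{-\alpha}
```

Because the fitted curve is smooth in $C$, aggregate loss can be forecast before training, which is the sense in which capability gains are "predictable"; the unpredictability in point 2 concerns individual task-level behaviors, which need not track this smooth trend.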