The Superalignment of Superhuman Intelligence with Large Language Models
Format: Article
Language: English
Abstract: We have witnessed superhuman intelligence thanks to the rapid development of large language models and multimodal language models. As applications of such superhuman models become increasingly widespread, a critical question arises: how can we ensure that superhuman models remain safe, reliable, and well aligned with human values? In this position paper, we discuss the concept of superalignment from a learning perspective to answer this question, outlining the learning-paradigm shift from large-scale pretraining and supervised fine-tuning to alignment training. We define superalignment as designing effective and efficient alignment algorithms to learn from noisily labeled data (point-wise samples or pair-wise preference data) in a scalable way, when the task becomes too complex for human experts to annotate and the model is stronger than human experts. We highlight several key research problems in superalignment, namely weak-to-strong generalization, scalable oversight, and evaluation. We then present a conceptual framework for superalignment consisting of three modules: an attacker, which generates adversarial queries that try to expose the weaknesses of a learner model; a learner, which refines itself by learning from scalable feedback generated by a critic model together with minimal human expert input; and a critic, which generates critiques or explanations for a given query-response pair with the goal of improving the learner. We discuss important research problems in each component of this framework and highlight interesting research ideas closely related to our proposed framework, such as self-alignment, self-play, and self-refinement. Finally, we highlight some future research directions for superalignment, including the identification of new emergent risks and multi-dimensional alignment.
DOI: 10.48550/arxiv.2412.11145
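The attacker–learner–critic loop described in the abstract can be sketched in a few lines of Python. This is a minimal illustrative skeleton, not the paper's implementation: the class names, the string-based "models", and the `superalignment_loop` driver are all hypothetical stand-ins showing how the three modules would interact.

```python
# Hypothetical sketch of the three-module superalignment framework:
# an attacker probes the learner, a critic explains the learner's
# mistakes, and the learner refines itself from that feedback.

class Attacker:
    """Generates adversarial queries meant to expose learner weaknesses."""
    def generate_query(self, round_idx: int) -> str:
        return f"adversarial-query-{round_idx}"  # stand-in for query generation

class Learner:
    """Responds to queries and refines itself from critiques."""
    def __init__(self) -> None:
        self.feedback_log: list[str] = []
    def respond(self, query: str) -> str:
        return f"response-to-{query}"            # stand-in for model inference
    def refine(self, critique: str) -> None:
        self.feedback_log.append(critique)       # stand-in for a training update

class Critic:
    """Generates a critique for a given query-response pair."""
    def critique(self, query: str, response: str) -> str:
        return f"critique-of({query}, {response})"

def superalignment_loop(rounds: int = 3) -> Learner:
    attacker, learner, critic = Attacker(), Learner(), Critic()
    for i in range(rounds):
        query = attacker.generate_query(i)        # attacker probes for weaknesses
        response = learner.respond(query)         # learner answers
        feedback = critic.critique(query, response)  # critic explains what is wrong
        learner.refine(feedback)                  # learner updates from feedback
    return learner
```

In the paper's framing, each `refine` step would be a real parameter update driven by scalable critic feedback plus minimal human expert input; here it simply records the critiques so the loop's data flow is visible.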