Towards Comprehensive Preference Data Collection for Reward Modeling

Reinforcement Learning from Human Feedback (RLHF) facilitates the alignment of large language models (LLMs) with human preferences, thereby improving the quality of the responses they generate. A critical component of RLHF is the reward model, which is trained on preference data and outputs a scalar reward...
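The scalar reward model described in the abstract is conventionally trained with a pairwise ranking objective over (chosen, rejected) response pairs. Below is a minimal sketch, assuming the standard Bradley-Terry formulation widely used in RLHF reward modeling; it is illustrative only, not the authors' implementation, and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(reward_chosen: torch.Tensor,
                             reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry negative log-likelihood that the chosen response
    # receives a higher scalar reward than the rejected one.
    # Illustrative sketch; not the paper's code.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scalar rewards for a batch of three preference pairs.
r_chosen = torch.tensor([1.2, 0.4, 2.0])
r_rejected = torch.tensor([0.3, 0.5, 1.1])
loss = pairwise_preference_loss(r_chosen, r_rejected)  # lower when chosen outranks rejected
```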

Bibliographic Details
Main Authors: Hu, Yulan; Li, Qingyang; Ouyang, Sheng; Chen, Ge; Chen, Kaihui; Mei, Lijun; Ye, Xucheng; Zhang, Fuzheng; Liu, Yong
Format: Article
Language: English
Online Access: Order full text