Improving alignment of dialogue agents via targeted human judgements

Bibliographic Details
Main Authors: Glaese, Amelia, McAleese, Nat, Trębacz, Maja, Aslanides, John, Firoiu, Vlad, Ewalds, Timo, Rauh, Maribeth, Weidinger, Laura, Chadwick, Martin, Thacker, Phoebe, Campbell-Gillingham, Lucy, Uesato, Jonathan, Huang, Po-Sen, Comanescu, Ramona, Yang, Fan, See, Abigail, Dathathri, Sumanth, Greig, Rory, Chen, Charlie, Fritz, Doug, Elias, Jaume Sanchez, Green, Richard, Mokrá, Soňa, Fernando, Nicholas, Wu, Boxi, Foley, Rachel, Young, Susannah, Gabriel, Iason, Isaac, William, Mellor, John, Hassabis, Demis, Kavukcuoglu, Koray, Hendricks, Lisa Anne, Irving, Geoffrey
Format: Article
Language: English
Description
Summary: We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We use reinforcement learning from human feedback to train our models with two new additions to help human raters judge agent behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown enables us to collect more targeted human judgements of agent behaviour and allows for more efficient rule-conditional reward models. Second, our agent provides evidence from sources supporting factual claims when collecting preference judgements over model statements. For factual questions, evidence provided by Sparrow supports the sampled response 78% of the time. Sparrow is preferred more often than baselines while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that though our model learns to follow our rules it can exhibit distributional biases.
DOI:10.48550/arxiv.2209.14375
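
The summary's central technical idea is the rule-conditional reward model: rather than training one classifier per rule, a single model is conditioned on the rule's natural-language text and predicts whether a given dialogue violates that rule, so per-rule human judgements can be pooled and new rules share parameters. The sketch below illustrates that idea only; it is not the paper's implementation. The class name, the encoder choice, and the summing of per-rule scores into a single penalty are all assumptions made for illustration.

```python
# Minimal sketch of a rule-conditional reward model, assuming a generic
# text-pair encoder. Names (RuleConditionalRM, the BERT checkpoint, the
# penalty aggregation) are illustrative, not from the paper.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class RuleConditionalRM(nn.Module):
    """Predicts P(rule violated) for a (dialogue, rule-text) pair."""

    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, dialogue: str, rule: str) -> torch.Tensor:
        # Conditioning on the rule text lets one model score every rule,
        # instead of training a separate classifier per rule.
        inputs = self.tokenizer(
            rule, dialogue, return_tensors="pt", truncation=True, max_length=512
        )
        hidden = self.encoder(**inputs).last_hidden_state[:, 0]  # [CLS] token
        return torch.sigmoid(self.head(hidden))  # violation probability


# Example rules paraphrasing the kind of natural-language rules described
# in the abstract (the paper's actual rule list is not reproduced here).
rules = [
    "Do not pretend to have a human body or physical experiences.",
    "Do not offer medical diagnoses or treatment advice.",
]

rm = RuleConditionalRM()
dialogue = "User: Are you a doctor?\nAgent: No, and I cannot give medical advice."

# One plausible use during RL: sum per-rule violation probabilities into a
# harmlessness penalty that is combined with a preference reward.
penalty = sum(rm(dialogue, r).item() for r in rules)
print(f"rule-violation penalty: {penalty:.3f}")
```

In this framing, asking raters about each rule separately yields a labelled (dialogue, rule, violated?) dataset that trains the model above directly, while the separate preference judgements over evidence-supported responses train the helpfulness reward used alongside it.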