Scaling Laws for Floating Point Quantization Training

Low-precision training is considered an effective strategy for reducing both training and downstream inference costs. Previous scaling laws for precision mainly focus on integer quantization and pay less attention to the constituents of floating-point quantization, so they cannot well fit the LLM...
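The "constituents" of a floating-point format referred to in the abstract are its exponent and mantissa bit widths. As a concrete illustration (a minimal sketch, not the paper's code), the snippet below simulates quantizing a tensor to a hypothetical IEEE-style format with `e_bits` exponent bits and `m_bits` mantissa bits; subnormals are handled by clamping the exponent, and special values (inf/NaN) are ignored for simplicity.

```python
# Minimal sketch of simulated floating-point quantization (assumed
# IEEE-style format, no inf/NaN handling; not the paper's implementation).
import numpy as np

def fp_quantize(x: np.ndarray, e_bits: int = 4, m_bits: int = 3) -> np.ndarray:
    """Round x to the nearest value representable in an (e_bits, m_bits)
    floating-point format (sign bit implied), clamping to the format's range."""
    bias = 2 ** (e_bits - 1) - 1                  # standard exponent bias
    max_exp = (2 ** e_bits - 2) - bias            # largest normal exponent
    min_exp = 1 - bias                            # smallest normal exponent
    max_val = (2.0 - 2.0 ** -m_bits) * 2.0 ** max_exp

    sign = np.sign(x)
    mag = np.clip(np.abs(x), 0.0, max_val)
    # Per-element exponent, clamped to the normal range; clamping at
    # min_exp makes values below it quantize on the subnormal grid.
    exp = np.clip(np.floor(np.log2(np.maximum(mag, 1e-38))), min_exp, max_exp)
    # Quantization step at exponent e is 2^(e - m_bits).
    step = 2.0 ** (exp - m_bits)
    return sign * np.round(mag / step) * step

w = np.random.randn(4, 4).astype(np.float32)
w_q = fp_quantize(w, e_bits=4, m_bits=3)          # E4M3-like FP8 quantization
print(np.abs(w - w_q).max())                      # worst-case quantization error
```

With `e_bits=4, m_bits=3` this approximates an E4M3-like FP8 format (IEEE-style range, max normal value 240), illustrating how the exponent/mantissa split, rather than a single integer bit width, determines the quantization error a scaling law for floating-point training has to model.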

Bibliographic Details
Authors: Sun, Xingwu; Li, Shuaipeng; Xie, Ruobing; Han, Weidong; Wu, Kan; Yang, Zhen; Li, Yixing; Wang, An; Li, Shuai; Xue, Jinbao; Cheng, Yu; Tao, Yangyu; Kang, Zhanhui; Xu, Chengzhong; Wang, Di; Jiang, Jie
Format: Article
Language: English