A Study of BFLOAT16 for Deep Learning Training

Bibliographic Details
Main Authors: Kalamkar, Dhiraj; Mudigere, Dheevatsa; Mellempudi, Naveen; Das, Dipankar; Banerjee, Kunal; Avancha, Sasikanth; Vooturi, Dharma Teja; Jammalamadaka, Nataraj; Huang, Jianyu; Yuen, Hector; Yang, Jiyan; Park, Jongsoo; Heinecke, Alexander; Georganas, Evangelos; Srinivasan, Sudarshan; Kundu, Abhisek; Smelyanskiy, Misha; Kaul, Bharat; Dubey, Pradeep
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
description This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of the IEEE 754 single-precision floating-point format (FP32), and conversion to/from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed precision training, and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in TensorFlow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.
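
The description notes that FP32-to-BFLOAT16 conversion is simple and that the rounding mode used for that conversion is one of the operational details the paper examines. The Python/NumPy sketch below is a minimal illustration of this point, not the authors' emulation code: it converts FP32 values to BFLOAT16 at the bit level, first by truncating the low 16 bits and alternatively by round-to-nearest-even (RNE). The function names and test values are illustrative assumptions, and NaN/Inf inputs are not handled specially.

# Minimal sketch of FP32 -> BFLOAT16 conversion (illustrative only; not the
# paper's emulation framework). BFLOAT16 keeps the FP32 sign bit, the 8-bit
# exponent and the top 7 mantissa bits, so conversion amounts to dropping
# (or rounding away) the low 16 bits of the FP32 bit pattern.
import numpy as np

def fp32_to_bf16_truncate(x: np.ndarray) -> np.ndarray:
    """Truncate: zero out the low 16 bits of each FP32 value."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

def fp32_to_bf16_rne(x: np.ndarray) -> np.ndarray:
    """Round-to-nearest-even: add a bias to the discarded bits, then truncate."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # Bias is 0x7FFF plus the lowest surviving mantissa bit, so exact ties
    # round toward an even last bit; a carry may ripple into the exponent,
    # which is the correct rounding behaviour. NaN/Inf are not special-cased.
    bias = np.uint32(0x7FFF) + ((bits >> np.uint32(16)) & np.uint32(1))
    return ((bits + bias) & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([1.0, 3.141592653589793, 1e-3, 65504.0], dtype=np.float32)
print("truncate:", fp32_to_bf16_truncate(x))
print("rne:     ", fp32_to_bf16_rne(x))

As a usage-style illustration of the general emulation idea (an assumption about how such a routine could be applied, not the authors' implementation), the same rounding can be applied to the operands of an FP32 matrix multiply so that the inputs carry only BFLOAT16 precision while accumulation stays in FP32:

a = np.random.rand(4, 8).astype(np.float32)    # activations, rounded to BF16 precision below
w = np.random.rand(8, 2).astype(np.float32)    # weights, rounded to BF16 precision below
y = fp32_to_bf16_rne(a) @ fp32_to_bf16_rne(w)  # BF16-precision inputs, FP32 accumulation
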
doi_str_mv 10.48550/arxiv.1905.12322
format Article
identifier DOI: 10.48550/arxiv.1905.12322
language eng
recordid cdi_arxiv_primary_1905_12322
source arXiv.org
subjects Computer Science - Learning
Statistics - Machine Learning
title A Study of BFLOAT16 for Deep Learning Training
url https://arxiv.org/abs/1905.12322