Efficient and Mathematically Robust Operations for Certified Neural Networks Inference
Format: Article
Language: English
Online access: Order full text
Abstract: 6th Workshop on Accelerated Machine Learning (AccML) at HiPEAC 2024. In recent years, machine learning (ML) and neural networks (NNs) have gained widespread use and attention across various domains, particularly in transportation for achieving autonomy, including the emergence of flying taxis for urban air mobility (UAM). However, concerns about certification have arisen, compelling the development of standardized processes encompassing the entire ML and NN pipeline. This paper delves into the inference stage and the requisite hardware, highlighting the challenges associated with IEEE 754 floating-point arithmetic and proposing alternative number representations. By evaluating diverse summation and dot product algorithms, we aim to mitigate issues related to non-associativity. Additionally, our exploration of fixed-point arithmetic reveals its advantages over floating-point methods, demonstrating significant hardware efficiencies. Employing an empirical approach, we ascertain the optimal bit-width necessary to attain an acceptable level of accuracy, considering the inherent complexity of bit-width optimization.
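
The abstract's central numerical point is that IEEE 754 floating-point addition is not associative, so the result of a summation or dot product depends on evaluation order. The sketch below demonstrates the effect in float32 and shows compensated (Kahan) summation as one classic member of the family of summation algorithms the abstract alludes to; the paper's actual algorithm set is not listed in this record, so the choice of Kahan here is purely illustrative.

```python
import numpy as np

def kahan_sum(values):
    """Compensated (Kahan) summation: carries a running error term `c`
    that recovers low-order bits lost in each float32 addition."""
    s = np.float32(0.0)
    c = np.float32(0.0)  # compensation for lost low-order bits
    for v in values:
        y = np.float32(v) - c
        t = np.float32(s + y)
        c = np.float32((t - s) - y)  # rounding error recovered this step
        s = t
    return s

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32) * 1e4

# Same data, different summation orders -> different float32 results.
print(np.float32(sum(x)))             # naive left-to-right
print(np.float32(sum(x[::-1])))       # reversed order
print(kahan_sum(x))                   # compensated, far less order-sensitive
print(np.sum(x.astype(np.float64)))   # float64 reference value
```

For certification this matters because a reordered reduction (e.g., a parallelized dot product) can produce a different, and not easily reproducible, inference result.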
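The abstract also argues that fixed-point arithmetic can replace floating point in NN inference at significant hardware savings, with accuracy governed by the chosen bit-width. A minimal sketch of that trade-off follows, assuming a simple symmetric fixed-point format with one sign and one integer bit; the helper names `to_fixed` and `fixed_dot`, the format, and the swept widths are illustrative assumptions, not the paper's method.

```python
import numpy as np

def to_fixed(x, frac_bits, total_bits):
    """Quantize floats to signed fixed point with `frac_bits` fractional bits,
    saturating at the representable range of `total_bits`-bit integers."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int64)

def fixed_dot(a_fx, b_fx, frac_bits):
    """Integer dot product: exact and associative, rescaled once at the end."""
    acc = int(np.dot(a_fx, b_fx))  # wide integer accumulator, no rounding
    return acc / float(1 << (2 * frac_bits))

rng = np.random.default_rng(1)
a = rng.uniform(-1, 1, 1024)
b = rng.uniform(-1, 1, 1024)
ref = float(np.dot(a, b))

for bits in (8, 12, 16):      # empirical bit-width sweep (assumed widths)
    frac = bits - 2           # 1 sign bit + 1 integer bit of headroom
    approx = fixed_dot(to_fixed(a, frac, bits), to_fixed(b, frac, bits), frac)
    print(f"{bits:2d}-bit fixed point: error = {abs(approx - ref):.3e}")
```

Because the accumulation is plain integer addition, it is exact and order-independent; all error comes from the initial quantization, which is what makes an empirical bit-width search like the one the abstract describes meaningful.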
DOI: 10.48550/arxiv.2401.08225