Efficient Online Processing with Deep Neural Networks
| Author: | |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| DOI: | 10.48550/arxiv.2306.13474 |
Abstract:

The capabilities and adoption of deep neural networks (DNNs) grow at an exhilarating pace: Vision models accurately classify human actions in videos and identify cancerous tissue in medical scans as precisely as human experts; large language models answer wide-ranging questions, generate code, and write prose, becoming the topic of everyday dinner-table conversations. Exciting as these uses are, the continually increasing model sizes and computational complexities have a dark side. The economic cost and negative environmental externalities of training and serving models are in evident disharmony with financial viability and climate action goals.

Instead of pursuing yet another increase in predictive performance, this dissertation is dedicated to improving neural network efficiency. Specifically, a core contribution addresses efficiency during online inference. Here, the concept of Continual Inference Networks (CINs) is proposed and explored across four publications. CINs extend prior state-of-the-art methods developed for offline processing of spatio-temporal data and reuse their pre-trained weights, improving their online processing efficiency by an order of magnitude. These advances are attained through a bottom-up computational reorganization and judicious architectural modifications. The benefit to online inference is demonstrated by reformulating several widely used network architectures into CINs, including 3D CNNs, ST-GCNs, and Transformer Encoders. An orthogonal contribution tackles the concurrent adaptation and computational acceleration of a large source model into multiple lightweight derived models. Drawing on fusible adapter networks and structured pruning, Structured Pruning Adapters achieve superior predictive accuracy under aggressive pruning while using significantly fewer learned weights than fine-tuning with pruning.
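The "bottom-up computational reorganization" behind CINs can be illustrated on the simplest spatio-temporal building block, a temporal convolution. The following is a minimal sketch assuming PyTorch; the class name `ContinualConv1d` and its `forward_step` method are illustrative stand-ins, not the dissertation's actual API, and strides, dilation, and padding are omitted. The idea: rather than re-running the kernel over a buffered clip for every new frame, each frame's contribution to the outputs it participates in is computed exactly once and accumulated in a small state.

```python
import torch
import torch.nn as nn


class ContinualConv1d(nn.Module):
    """Temporal convolution reformulated for step-wise online inference.

    Each incoming frame's contribution to the k outputs it takes part in
    is computed once and kept in a small state of partial sums, so frames
    seen earlier are never reprocessed.
    """

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.k = kernel_size
        self.state = None  # partial sums for the next k-1 outputs

    def forward_step(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_ch) -- a single new time step.
        batch, out_ch = x.shape[0], self.weight.shape[0]
        if self.state is None:
            self.state = [x.new_zeros(batch, out_ch) for _ in range(self.k - 1)]
        # contribs[:, :, i] = W_i @ x, with W_i the i-th temporal kernel slice.
        contribs = torch.einsum("oik,bi->bok", self.weight, x)
        out = (self.state.pop(0) if self.state else 0) + contribs[:, :, -1] + self.bias
        if self.k > 1:
            self.state.append(x.new_zeros(batch, out_ch))
            for j in range(self.k - 1):
                # W_{k-2-j} @ x contributes to the output j+1 steps ahead.
                self.state[j] = self.state[j] + contribs[:, :, self.k - 2 - j]
        return out


# After a warm-up of k-1 steps, step-wise outputs match the offline conv.
torch.manual_seed(0)
layer = ContinualConv1d(in_ch=3, out_ch=8, kernel_size=4)
clip = torch.randn(1, 3, 10)  # (batch, channels, time)
online = torch.stack([layer.forward_step(clip[:, :, t]) for t in range(10)], dim=-1)
offline = nn.functional.conv1d(clip, layer.weight, layer.bias)
assert torch.allclose(online[:, :, 3:], offline, atol=1e-5)
```

Once a full receptive field has been seen, the step-wise outputs coincide with the offline convolution, which is how pre-trained weights can be reused unchanged; applying this kind of reorganization throughout a deep network removes the redundant recomputation that offline models incur during online inference.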
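Similarly, the interplay of the two ingredients behind Structured Pruning Adapters, fusible low-rank adapters and structured pruning, can be sketched as follows. This is a loose illustration under simplifying assumptions, not the dissertation's method: the helpers `fuse_adapter` and `prune_out_channels` are hypothetical names, and a plain magnitude criterion stands in for learned channel importance.

```python
import torch
import torch.nn as nn


def fuse_adapter(base: nn.Linear, down: torch.Tensor, up: torch.Tensor) -> nn.Linear:
    """Fold a low-rank adapter into the frozen base weight (W + up @ down),
    so the adapted model carries no extra inference cost."""
    fused = nn.Linear(base.in_features, base.out_features,
                      bias=base.bias is not None)
    with torch.no_grad():
        fused.weight.copy_(base.weight + up @ down)
        if base.bias is not None:
            fused.bias.copy_(base.bias)
    return fused


def prune_out_channels(layer: nn.Linear, keep: torch.Tensor) -> nn.Linear:
    """Structured pruning: drop entire output channels (rows of W)."""
    pruned = nn.Linear(layer.in_features, int(keep.sum()),
                       bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[keep])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[keep])
    return pruned


# Frozen pre-trained layer; only the tiny adapter factors would be trained.
base = nn.Linear(256, 256)
for p in base.parameters():
    p.requires_grad_(False)
rank = 4
down = torch.randn(rank, 256) * 0.01  # learned in practice
up = torch.zeros(256, rank)           # learned in practice

# ... adapter training would happen here ...

# Fuse the adapter, then keep the half of output channels with largest norm.
fused = fuse_adapter(base, down, up)
importance = fused.weight.norm(dim=1)
keep = importance > importance.median()  # keeps 128 of 256 rows here
compact = prune_out_channels(fused, keep)
print(compact)  # Linear(in_features=256, out_features=128, bias=True)
```

Since only the small rank-4 factors are learned per derived model while the source model stays frozen, each lightweight variant requires a small fraction of the learned weights of full fine-tuning, which is the trade-off the abstract highlights.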