Where Do We Stand with Implicit Neural Representations? A Technical and Performance Survey
Format: | Article |
---|---|
Language: | English |
Abstract: | Implicit Neural Representations (INRs) have emerged as a paradigm in
knowledge representation, offering exceptional flexibility and performance
across a diverse range of applications. INRs leverage multilayer perceptrons
(MLPs) to model data as continuous implicit functions, providing critical
advantages such as resolution independence, memory efficiency, and
generalisation beyond discretised data structures. Their ability to solve
complex inverse problems makes them particularly effective for tasks including
audio reconstruction, image representation, 3D object reconstruction, and
high-dimensional data synthesis. This survey provides a comprehensive review of
state-of-the-art INR methods, introducing a clear taxonomy that categorises
them into four key areas: activation functions, position encoding, combined
strategies, and network structure optimisation. We rigorously analyse their
critical properties, such as full differentiability, smoothness, compactness,
and adaptability to varying resolutions, while also examining their strengths
and limitations in addressing locality biases and capturing fine details. Our
experimental comparison offers new insights into the trade-offs between
different approaches, showcasing the capabilities and challenges of the latest
INR techniques across various tasks. In addition to identifying areas where
current methods excel, we highlight key limitations and potential avenues for
improvement, such as developing more expressive activation functions, enhancing
positional encoding mechanisms, and improving scalability for complex,
high-dimensional data. This survey serves as a roadmap for researchers,
offering practical guidance for future exploration in the field of INRs. We aim
to foster new methodologies by outlining promising research directions for INRs
and their applications. |
---|---|
DOI: | 10.48550/arxiv.2411.03688 |
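
The abstract describes INRs as MLPs that map continuous coordinates to signal values, with sinusoidal activation functions forming one branch of the taxonomy. The following is a minimal illustrative sketch of such a coordinate MLP fit to a toy 1D signal, assuming PyTorch; the class names (`SineLayer`, `SirenINR`) and hyperparameters (frequency factor, width, learning rate) are assumptions chosen for clarity, not code or settings from the survey's experiments.

```python
# Minimal sketch of an implicit neural representation (INR): a coordinate MLP
# with sinusoidal activations (in the spirit of SIREN) fit to a 1D signal.
# Illustrative only; hyperparameters are assumptions, not the survey's setup.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sin(omega_0 * x) activation."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class SirenINR(nn.Module):
    """MLP mapping a coordinate (e.g. t in [-1, 1]) to a signal value."""
    def __init__(self, in_features=1, hidden=64, out_features=1, layers=3):
        super().__init__()
        net = [SineLayer(in_features, hidden)]
        for _ in range(layers - 1):
            net.append(SineLayer(hidden, hidden))
        net.append(nn.Linear(hidden, out_features))  # linear output head
        self.net = nn.Sequential(*net)

    def forward(self, coords):
        return self.net(coords)

# Fit the INR to a toy 1D signal: the trained network *is* the representation,
# and it can be queried at any continuous coordinate (resolution independence).
coords = torch.linspace(-1, 1, 256).unsqueeze(-1)                     # training grid
signal = torch.sin(8 * torch.pi * coords) * torch.exp(-coords ** 2)   # toy target

model = SirenINR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(2000):
    optimizer.zero_grad()
    loss = ((model(coords) - signal) ** 2).mean()                     # simple MSE fit
    loss.backward()
    optimizer.step()

# Query at a finer resolution than the training grid.
fine_coords = torch.linspace(-1, 1, 4096).unsqueeze(-1)
with torch.no_grad():
    upsampled = model(fine_coords)
print(upsampled.shape)  # torch.Size([4096, 1])
```

Because the signal is stored in the MLP weights rather than on a fixed grid, memory cost scales with network size rather than sampling resolution, which is the memory-efficiency and resolution-independence argument made in the abstract.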
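
The taxonomy's second branch, positional encoding, is commonly realised as a Fourier-feature mapping that lifts low-dimensional coordinates into a bank of sines and cosines before a standard ReLU MLP. The sketch below illustrates that idea under assumed settings (random Gaussian frequencies, frequency count, and scale are illustrative choices, not values taken from the survey).

```python
# Minimal sketch of Fourier-feature positional encoding for an INR.
# The encoding maps low-dimensional coordinates to sines and cosines so a
# plain ReLU MLP can capture high-frequency detail (mitigating spectral bias).
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """gamma(x) = [sin(2*pi*Bx), cos(2*pi*Bx)] with a fixed random matrix B."""
    def __init__(self, in_features=2, num_frequencies=64, scale=10.0):
        super().__init__()
        # Random Gaussian frequencies, kept fixed during training.
        self.register_buffer("B", torch.randn(in_features, num_frequencies) * scale)

    def forward(self, coords):
        proj = 2 * torch.pi * coords @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# A plain ReLU MLP on top of the encoding; without the encoding such an MLP
# tends to fit only the low-frequency content of the target signal.
encoding = FourierFeatures(in_features=2, num_frequencies=64)
mlp = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),   # 128 = 2 * num_frequencies
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3),                # e.g. an RGB value at a pixel coordinate
)

coords = torch.rand(1024, 2)          # pixel coordinates in [0, 1]^2
rgb = mlp(encoding(coords))
print(rgb.shape)                      # torch.Size([1024, 3])
```

The choice between sinusoidal activations, positional encodings, or combined strategies is exactly the trade-off space the survey's experimental comparison examines.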