Spiking PointNet: Spiking Neural Networks for Point Clouds
Saved in:
Main author: , , , , , ,
Format: Article
Language: eng
Keywords:
Online access: Order full text
Abstract: Recently, Spiking Neural Networks (SNNs), which enjoy extreme energy efficiency, have drawn much research attention in 2D visual recognition and shown steadily growing application potential. However, it remains underexplored whether SNNs can be generalized to 3D recognition. To this end, we present Spiking PointNet, the first spiking neural model for efficient deep learning on point clouds. We identify two major obstacles limiting the application of SNNs to point clouds: the intrinsic optimization difficulty of SNNs, which impedes training a large spiking model with many time steps, and the expensive memory and computation cost of PointNet, which makes training a large spiking point model impractical. To solve both problems simultaneously, we present a trained-less but learning-more paradigm for Spiking PointNet, with theoretical justification and in-depth experimental analysis. Specifically, our Spiking PointNet is trained with only a single time step, yet achieves better performance under multi-time-step inference than a model trained directly with multiple time steps. We conduct extensive experiments on ModelNet10 and ModelNet40 to demonstrate the effectiveness of Spiking PointNet. Notably, our Spiking PointNet can even outperform its ANN counterpart, which is rare in the SNN field and thus suggests a promising direction for future work. Moreover, Spiking PointNet shows impressive speedup and storage savings in the training phase.
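The core idea described in the abstract — training an SNN with a single time step but running inference over several time steps — can be illustrated with a toy leaky integrate-and-fire (LIF) layer. Everything below (function names, layer shapes, the leak constant `tau`, the threshold `v_th`) is an illustrative assumption, not code from the paper:

```python
import numpy as np

def lif_forward(x, v, v_th=1.0, tau=2.0):
    """One step of a Leaky Integrate-and-Fire neuron: the membrane
    potential leaks, integrates the input current, emits a binary
    spike on crossing the threshold, and hard-resets after spiking."""
    v = v / tau + x                      # leaky integration
    spike = (v >= v_th).astype(x.dtype)  # binary spike output
    v = v * (1.0 - spike)                # hard reset where spiked
    return spike, v

def snn_inference(x, weights, time_steps=4):
    """Run a one-layer spiking model for several time steps on a
    static input and average the output spikes (rate decoding).
    The same weights — e.g. obtained by training with a single
    time step — are simply reused for time_steps > 1, mirroring
    the train-with-one-step, infer-with-many paradigm."""
    v = np.zeros(weights.shape[1])
    out = np.zeros(weights.shape[1])
    for _ in range(time_steps):
        spike, v = lif_forward(x @ weights, v)
        out += spike
    return out / time_steps              # firing rate in [0, 1]

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))   # toy weights standing in for trained ones
x = rng.random(8)             # toy static input feature vector
rate = snn_inference(x, w, time_steps=4)
```

Because the spikes are binary and the weights are fixed, raising `time_steps` at inference only refines the rate estimate; it adds no training cost, which is the efficiency the paradigm targets.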
DOI: 10.48550/arxiv.2310.06232