Novel Multi-Task Learning for Motion Magnification


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2023-10, Vol. 33 (10), p. 1-1
Authors: Chen, Li; Peng, Cong; Zhao, Bingchao
Format: Article
Language: English
Description
Abstract: Motion magnification techniques extend the perception of the naked eye to tiny variations such as muscle tremors and mechanical vibrations. Current approaches are developed from either the Lagrangian or the Eulerian perspective; however, they either require complex computation or cannot distinguish subtle variations from noise. This paper proposes a novel motion magnification approach that fuses Lagrangian and Eulerian methods via multi-task learning. The approach builds mainly on Eulerian methods for efficient inference and introduces optical flow from Lagrangian methods for precise motion perception. To optimize the training process, homoscedastic uncertainty is introduced to balance the tasks. To overcome the lack of real magnified images, the paper establishes a synthetic dataset from real images selected from public datasets; the dataset simulates tiny and magnified motions through image preprocessing and affine transformations. In qualitative and quantitative experiments, the proposed approach outperforms previous ones, producing few artifacts and showing strong robustness to magnification factors, motion magnitudes, and noise disturbance. Additionally, the optical flow subnet is evaluated on public benchmarks to demonstrate its motion extraction capacity and the assistance it offers to motion magnification.
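The abstract mentions balancing the magnification and optical-flow tasks with homoscedastic uncertainty. A minimal NumPy sketch of the standard formulation of that idea (a learnable log-variance per task, as in Kendall et al.'s multi-task weighting) is shown below; the exact loss terms and parameterization used in this paper are assumptions here, not taken from the source.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses via homoscedastic uncertainty:
    L = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)
    is a learnable scalar per task. A large s_i down-weights a
    noisy task's loss but pays a +s_i regularization penalty."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# With s_i = 0 (i.e. sigma_i = 1) the weighting is neutral and the
# combined loss reduces to the plain sum of the task losses.
neutral = uncertainty_weighted_loss([0.5, 2.0], [0.0, 0.0])  # 2.5

# Raising a task's log-variance shrinks its contribution.
damped = uncertainty_weighted_loss([0.5, 2.0], [0.0, 2.0])
```

In a training loop the `log_vars` would be framework parameters updated by the optimizer alongside the network weights, so the balance between the magnification loss and the optical-flow loss is learned rather than hand-tuned.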
ISSN: 1051-8215
EISSN: 1558-2205
DOI: 10.1109/TCSVT.2023.3262777