Robust super-resolution for interactive video navigation

Bibliographic Details
Main authors: Salvador, J., Kochale, A., Borsum, M.
Format: Conference paper
Language: English
Description
Abstract: One of the main technical limitations in interactive systems for next-generation audio-visual experiences is the limited resolution of the captured content. The presented method tackles the problem of generating super-resolved versions of input video frames, allowing the user to visualize the captured visual content at any desired scale with minimal degradation. First, the low-frequency band of the super-resolved video frame is estimated as an up-scaled interpolation of the low-resolution frame. Then, the high-frequency band is extrapolated from the low-resolution frame by exploiting local cross-scale self-similarity. The introduction of a suitable image prior in both stages makes it possible to robustly enhance the spatial resolution even in video sequences containing aliasing. The most demanding processing stages of the presented algorithm have been implemented on graphics hardware (GPU). The experimental results show a level of quality similar to that of state-of-the-art methods, with the advantages of real-time processing and robustness against spatial aliasing.
ISSN: 2166-6814, 2166-6822
DOI: 10.1109/ICCE-Berlin.2012.6336528
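
The abstract describes a two-stage algorithm: interpolate the low-resolution frame to obtain the low-frequency band of the super-resolved frame, then extrapolate the high-frequency band by local cross-scale self-similarity. The following is a minimal, single-channel Python sketch of that general idea, not the authors' implementation; the Gaussian low-pass filter, the patch and search-window sizes, and the function name super_resolve are assumptions for illustration, and the paper's image prior and GPU mapping are omitted.

import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def super_resolve(lr, scale=2, patch=5, search=3):
    """Two-stage super-resolution sketch (assumed parameters, grayscale only).

    1) Low-frequency band of the output: spline interpolation of the LR frame.
    2) High-frequency band: for each output patch, find the most similar patch
       in a low-pass version of the LR frame within a small local window and
       copy the corresponding high-frequency sample (local cross-scale
       self-similarity).
    """
    lr = np.asarray(lr, dtype=np.float64)

    # Stage 1: low-frequency band of the super-resolved frame.
    hr_low = zoom(lr, scale, order=3)

    # Cross-scale example pair: the low-pass LR frame provides the coarse
    # appearance, the residual provides the high-frequency detail to transfer.
    lr_low = gaussian_filter(lr, sigma=0.8 * scale)
    lr_high = lr - lr_low

    hr_high = np.zeros_like(hr_low)
    h, w = hr_low.shape
    r = patch // 2

    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = hr_low[y - r:y + r + 1, x - r:x + r + 1]
            # Search only a small window around the corresponding LR position.
            cy, cx = int(round(y / scale)), int(round(x / scale))
            best, best_d = None, np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    sy, sx = cy + dy, cx + dx
                    if r <= sy < lr.shape[0] - r and r <= sx < lr.shape[1] - r:
                        cand = lr_low[sy - r:sy + r + 1, sx - r:sx + r + 1]
                        d = np.sum((ref - cand) ** 2)
                        if d < best_d:
                            best_d, best = d, (sy, sx)
            if best is not None:
                # Stage 2: transfer the matched high-frequency sample.
                hr_high[y, x] = lr_high[best]

    return hr_low + hr_high

# Example usage on a synthetic grayscale frame.
frame = np.random.rand(48, 48)
hr = super_resolve(frame, scale=2)
print(hr.shape)  # roughly (96, 96)

A brute-force nested patch search like this is far from real-time; per the abstract, the paper instead runs the most demanding stages on the GPU to reach real-time performance.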