Super-resolution image reconstruction through coupled dictionary learning

Full description

Bibliographic details
Main authors: Sudhamalla, Mallesh; Gotte, Vijaya Lakshmi; Kumar, K. Bharath
Format: Conference proceedings
Language: English
Subjects:
Online access: Full text
Description
Summary: High-resolution images are challenging to capture owing to subject motion during acquisition. 2D anisotropic stack volumes, by contrast, are far more prevalent, and improving their resolution is strongly motivated by both viewing and analysis needs. Current image enhancement approaches typically achieve good results only through simple interpolation, owing to the absence of effective reconstruction methods that handle non-rigid motion. To this end, we consider a super-resolution technique that exploits extensive training data and tolerates inaccurate alignment in order to support patch-based reconstruction with reduced training-data requirements. Accurate image reconstruction is achievable in the presence of high data redundancy, which arises only when working at the patch scale. To do this, high-resolution and low-resolution patch dictionaries are trained simultaneously on high-resolution patch sequences, ensuring that low-resolution and high-resolution patch representations share a common relationship. Once these dictionaries have been trained, they are used to reconstruct from a low-resolution representation of a similar anatomy, enabling advanced workflows on medical images. The proposed reconstruction methodology demonstrates substantial improvements over typical interpolation.
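The coupled-dictionary idea in the abstract can be illustrated with a minimal sketch: low-resolution (LR) and high-resolution (HR) patch features are concatenated so that one shared sparse code describes both, the joint dictionary is then split into an LR part and an HR part, and a new LR patch is super-resolved by sparse-coding it against the LR part and mapping the same code through the HR part. This is a generic sketch of joint dictionary training (in the spirit of coupled dictionary learning), not the authors' implementation; the patch sizes, synthetic data, and scikit-learn components used here are all assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Synthetic paired patches (assumption, for illustration only): each
# 8x8 HR patch (64-dim row) has a 4x4 LR counterpart (16-dim row).
n_patches, hr_dim, lr_dim = 200, 64, 16
hr = rng.normal(size=(n_patches, hr_dim))
lr = hr.reshape(n_patches, lr_dim, 4).mean(axis=2)  # crude downsampling stand-in

# Coupled training: stacking LR and HR features forces one sparse code
# per patch; the learned joint dictionary splits into D_l and D_h that
# share that common representation.
joint = np.hstack([lr, hr])
model = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                           transform_algorithm="lasso_lars", random_state=0)
model.fit(joint)
D = model.components_                      # shape (32, lr_dim + hr_dim)
D_l, D_h = D[:, :lr_dim], D[:, lr_dim:]

# Reconstruction: sparse-code an LR patch against D_l, then map the
# same code through D_h to predict its HR counterpart.
lr_test = lr[:1]
code = sparse_encode(lr_test, D_l, algorithm="lasso_lars", alpha=0.1)
hr_pred = code @ D_h                       # predicted HR patch, shape (1, 64)
```

In practice the dictionaries would be trained on real patch pairs extracted from aligned LR/HR image stacks, and the predicted patches would be averaged back into the output volume.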
ISSN: 0094-243X, 1551-7616
DOI: 10.1063/5.0125175