Fractional means based method for multi-oriented keyword spotting in video/scene/license plate images

• A novel approach for keyword spotting in video, scene and license plate images.
• A new model based on fractional means for reducing background complexity.
• The combination of Radon and Fourier coefficients to extract context features.
• Minimum cost path based ring growing to restore missing characters.

Detailed description

Bibliographic details
Published in: Expert Systems with Applications, 2019-03, Vol. 118, p. 1-19
Authors: Shivakumara, Palaiahnakote; Roy, Sangheeta; Jalab, Hamid A.; Ibrahim, Rabha W.; Pal, Umapada; Lu, Tong; Khare, Vijeta; Wahab, Ainuddin Wahid Bin Abdul
Format: Article
Language: English
Online access: Full text

Description
Abstract: Retrieving desired information from databases containing video, natural scene, and license plate images through keyword spotting poses a major challenge to expert systems, owing to the background and foreground variations of text in real-time environments. To reduce the background complexity of input images, we introduce a new model based on fractional means that considers the neighboring information of pixels to widen the gap between text and background. The process then obtains text candidates with the help of k-means clustering. The proposed approach explores the combination of Radon and Fourier coefficients to define context features based on the regular patterns given by coefficient distributions for the foreground and background of text candidates. This process eliminates non-text candidates regardless of font type, size, color, orientation and script, and yields representatives of texts. The proposed approach then exploits the fact that text pixels share almost the same values to restore missing text components from the Canny edge image, through a new idea of minimum cost path based ring growing, and then outputs keywords. Furthermore, the proposed approach extracts the same above-mentioned features both locally and globally for spotting words in images. Experimental results on different benchmark databases, namely, ICDAR 2013, ICDAR 2015, YVT, NUS video data, ICDAR 2013, ICDAR 2015, SVT, MSRA, UCSC, Medialab and Uninusubria license plate data, show that the proposed method is effective and useful compared to existing methods.
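The first two stages the abstract describes — k-means clustering of pixel intensities to obtain text candidates, followed by frequency-domain context features that separate regular text patterns from background — can be sketched as below. This is a minimal illustration, not the paper's implementation: the intensity-only k-means, the choice of the brightest cluster as "text", and the axis-projection stand-in for the full Radon transform are all simplifying assumptions.

```python
import numpy as np

def kmeans_text_candidates(gray, k=3, iters=20):
    """Cluster pixel intensities with k-means and return a mask of the
    brightest cluster as text candidates (assumes text is brighter than
    the background, which need not hold in general)."""
    pixels = gray.reshape(-1, 1).astype(float)
    # deterministic init: centroids spread evenly over the intensity range
    centroids = np.linspace(pixels.min(), pixels.max(), k).reshape(-1, 1)
    for _ in range(iters):
        dists = np.abs(pixels - centroids.T)      # (N, k) distance table
        labels = dists.argmin(axis=1)             # nearest-centroid labels
        for j in range(k):                        # recompute centroids,
            members = pixels[labels == j]         # keeping empty ones fixed
            if len(members):
                centroids[j] = members.mean()
    return (labels == centroids.ravel().argmax()).reshape(gray.shape)

def context_features(patch, n_coeffs=8):
    """Simplified stand-in for the paper's Radon+Fourier context features:
    project the patch at 0 and 90 degrees (the two trivial Radon angles)
    and keep the leading Fourier magnitudes of each projection profile,
    normalised by the DC term."""
    feats = []
    for axis in (0, 1):
        profile = patch.sum(axis=axis).astype(float)
        spectrum = np.abs(np.fft.rfft(profile))
        coeffs = np.zeros(n_coeffs)
        take = min(n_coeffs, len(spectrum))
        coeffs[:take] = spectrum[:take]
        if coeffs[0] > 0:
            coeffs /= coeffs[0]                   # scale invariance
        feats.append(coeffs)
    return np.concatenate(feats)
```

A regular stroke pattern concentrates energy at a characteristic frequency of its projection profile, whereas flat background collapses into the DC term — the kind of coefficient regularity the context features exploit to discard non-text candidates.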
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2018.08.015