VGTS: Visually Guided Text Spotting for novel categories in historical manuscripts



Bibliographic details
Published in: Expert Systems with Applications, 2025-02, Vol. 261, p. 125557, Article 125557
Authors: Hu, Wenbo, Zhan, Hongjian, Ma, Xinchen, Liu, Cong, Yin, Bing, Lu, Yue, Suen, Ching Y.
Format: Article
Language: English
Online access: Full text
Description
Abstract: In the field of historical manuscript research, scholars frequently encounter novel symbols in ancient texts, investing considerable effort in their identification and documentation. Although existing object detection methods achieve impressive performance on known categories, they struggle to recognize novel symbols without retraining. To address this limitation, we propose a Visually Guided Text Spotting (VGTS) approach that accurately spots novel characters using just one annotated support sample. The core of VGTS is a spatial alignment module consisting of a Dual Spatial Attention (DSA) block and a Geometric Matching (GM) block. The DSA block aims to identify, focus on, and learn discriminative spatial regions in the support and query images, mimicking the human visual spotting process. It first refines the support image by analyzing inter-channel relationships to identify critical areas, and then refines the query image by focusing on informative key points. The GM block, on the other hand, establishes the spatial correspondence between the two images, enabling accurate localization of the target character in the query image. To tackle the example imbalance problem in low-resource spotting tasks, we develop a novel torus loss function that enhances the discriminative power of the embedding space for distance metric learning. To further validate our approach, we introduce a new dataset featuring ancient Dongba hieroglyphics (DBH) associated with the Naxi minority of China. Extensive experiments on the DBH dataset and other public datasets, including Egyptian Hieroglyph (EGY), Historical Arabic Documents (HAD), Tripitaka Koreana in Han (TKH), and Notary Charters (NC), show that VGTS consistently surpasses state-of-the-art methods. The proposed framework exhibits great potential for application in historical manuscript text spotting, enabling scholars to efficiently identify and document novel symbols with minimal annotation effort.
Highlights:
• We propose a flexible text spotting model for novel characters.
• We propose an innovative Dual Spatial Attention (DSA) block for discriminative feature extraction.
• We propose the torus loss for challenging samples and effective distance metric learning.
• A new dataset of ancient script manuscripts for low-resource text spotting research.
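To make the attention-then-matching flow described in the abstract more concrete, the sketch below shows one plausible reading of it: channel attention refines the support (exemplar) features, spatial attention refines the query (page) features, and a cosine-correlation map stands in for the localization step. This is not the authors' implementation; the squeeze-and-excitation-style channel attention, the two-channel spatial attention, the function and class names, and all hyper-parameters are illustrative assumptions, and the GM block's explicit spatial correspondence is not modeled here.

```python
# Minimal sketch (not the authors' code) of a dual-attention-then-matching step
# in the spirit of the DSA block described in the abstract. Module designs and
# hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel weighting (assumed form)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> per-channel weights in (0, 1), applied multiplicatively
        w = torch.sigmoid(self.fc(x.mean(dim=(2, 3))))
        return x * w[:, :, None, None]


class SpatialAttention(nn.Module):
    """Single-map spatial weighting that highlights informative locations (assumed form)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)            # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)           # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn


def spot_support_in_query(support_feat: torch.Tensor, query_feat: torch.Tensor) -> torch.Tensor:
    """Return a (B, H, W) similarity map locating the support character in the query."""
    channel_attn = ChannelAttention(support_feat.size(1))
    spatial_attn = SpatialAttention()
    s = channel_attn(support_feat)                   # refine support via inter-channel relations
    q = spatial_attn(query_feat)                     # refine query via informative key points
    s_vec = F.normalize(s.mean(dim=(2, 3)), dim=1)   # pooled support embedding, (B, C)
    q_map = F.normalize(q, dim=1)                    # per-location query embeddings, (B, C, H, W)
    return torch.einsum("bc,bchw->bhw", s_vec, q_map)


if __name__ == "__main__":
    support = torch.randn(1, 64, 16, 16)   # backbone features of one annotated support crop
    query = torch.randn(1, 64, 64, 64)     # backbone features of a manuscript page
    heatmap = spot_support_in_query(support, query)
    print(heatmap.shape)                   # torch.Size([1, 64, 64])
```

In the paper itself, the GM block goes beyond such a correlation map by establishing an explicit spatial correspondence between support and query, and the torus loss (whose exact form is not given in this record) shapes the embedding space used for this matching.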
ISSN: 0957-4174
DOI: 10.1016/j.eswa.2024.125557