A neural model for text localization, transcription and named entity recognition in full pages
Published in: Pattern Recognition Letters, 2020-08, Vol. 136, pp. 219-227
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract:
• The network localizes, transcribes and recognizes named entities in full-page images.
• The model benefits from task interdependence and bi-dimensional structure.
• Exhaustive evaluation on mixed printed and handwritten documents.

In recent years, the consolidation of deep neural network architectures for information extraction in document images has brought substantial improvements in the performance of each of the tasks involved in this process: text localization, transcription, and named entity recognition. However, these tasks are traditionally performed with separate methods for each. In this work we propose an end-to-end model that combines a one-stage object detection network with branches for the recognition of text and named entities, respectively, so that shared features can be learned simultaneously from the training error of each task. In this way the model jointly performs handwritten text detection, transcription, and named entity recognition at page level with a single feed-forward step. We exhaustively evaluate our approach on different datasets, discussing its advantages and limitations compared to sequential approaches. The results show that the model is capable of benefiting from shared features by simultaneously solving interdependent tasks.
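The abstract describes a single network in which a shared backbone feeds a one-stage detection head plus transcription and named-entity branches. Below is a minimal, illustrative PyTorch sketch of that general idea only; the module names, layer sizes, the use of ROI pooling, the CTC-style character output and the grayscale-input assumption are our own assumptions, not the authors' implementation.

```python
# Illustrative sketch of a shared-backbone, multi-branch page model.
# Not the paper's code; all design choices here are assumptions.
import torch
import torch.nn as nn
import torchvision.ops as ops


class JointPageModel(nn.Module):
    def __init__(self, num_chars=80, num_entities=10, num_anchors=9):
        super().__init__()
        # Shared convolutional backbone: its features feed all three tasks,
        # so gradients from every task update the same parameters.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One-stage detection head: objectness + 4 box offsets per anchor.
        self.det_head = nn.Conv2d(128, num_anchors * 5, 1)
        # Transcription branch: pooled region features read as a sequence.
        self.recog_rnn = nn.LSTM(128 * 8, 256, batch_first=True, bidirectional=True)
        self.char_out = nn.Linear(512, num_chars + 1)   # +1 for a CTC blank
        # Named-entity branch: one semantic label per detected text region.
        self.ner_out = nn.Linear(128 * 8 * 32, num_entities)

    def forward(self, images, boxes):
        feats = self.backbone(images)                    # (B, 128, H/8, W/8)
        det_maps = self.det_head(feats)                  # dense detection outputs
        # Pool a fixed-size feature patch for every text-region box.
        regions = ops.roi_align(feats, boxes, output_size=(8, 32),
                                spatial_scale=1.0 / 8)   # (N, 128, 8, 32)
        seq = regions.permute(0, 3, 1, 2).flatten(2)     # (N, 32, 128*8)
        rnn_out, _ = self.recog_rnn(seq)
        char_logits = self.char_out(rnn_out)             # (N, 32, num_chars+1)
        ner_logits = self.ner_out(regions.flatten(1))    # (N, num_entities)
        return det_maps, char_logits, ner_logits


if __name__ == "__main__":
    model = JointPageModel()
    page = torch.randn(1, 1, 256, 512)                  # one grayscale page image
    boxes = [torch.tensor([[10., 20., 200., 60.]])]     # one text-line box (x1, y1, x2, y2)
    det, chars, ents = model(page, boxes)
    print(det.shape, chars.shape, ents.shape)
```

In such a setup the three losses (detection, CTC transcription, entity classification) would be summed and backpropagated through the shared backbone in one pass, which is the mechanism by which interdependent tasks can benefit from common features.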
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2020.05.001