Robust Classification of Smartphone Captured Handwritten Document Images Using Deep Learning
Published in: IEEE Access, 2024-12, p. 1-1
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Document classification is a challenging research problem in document image analysis, mainly due to the presence of degradations such as low illumination, blur, and shadows. In this paper, a deep learning-based two-level hierarchical approach is proposed for classifying smartphone-captured handwritten document images into three classes. The model is built on pre-trained weights transferred from XceptionNet, with additional convolutional layers for feature extraction focused on high-frequency details in document images. A set of synthetically generated datasets covering all three classes, with two to three levels of degradation severity, is used to train the proposed model, together with real images labelled against reference datasets. The dataset comprises blur and low-illumination images with three severity levels and shadow images with two severity levels. In total, 2,841 images were collected across the three classes: 783 blur, 828 low illumination, and 1,230 shadow. The proposed model is compared with nine state-of-the-art deep learning classification models and outperforms them by addressing the problem of overfitting.
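The abstract describes transfer learning from XceptionNet plus extra convolutional layers for a three-class degradation classifier. A minimal sketch of that kind of architecture in Keras is shown below; the input resolution, filter counts, kernel sizes, and layer placement are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of an Xception-based transfer-learning classifier for three
# degradation classes (blur / low illumination / shadow), as described in the
# abstract. Filter counts, kernel sizes, and input shape are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_classifier(input_shape=(299, 299, 3), num_classes=3,
                     weights="imagenet"):
    # Pre-trained Xception backbone without its top classifier; freezing it
    # reuses the transferred weights and helps limit overfitting.
    backbone = tf.keras.applications.Xception(
        include_top=False, weights=weights, input_shape=input_shape)
    backbone.trainable = False

    x = backbone.output
    # Additional convolutional layers intended to extract high-frequency
    # detail from the document images (sizes are illustrative choices).
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(backbone.input, outputs)


model = build_classifier(weights=None)  # pass "imagenet" for transfer learning
```

In practice one would train first with the frozen backbone and then optionally fine-tune its top blocks at a low learning rate; the two-level hierarchy described in the abstract would use a second, similar classifier at the next level.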
ISSN: 2169-3536
DOI: | 10.1109/ACCESS.2024.3520327 |