Performance comparisons of classification techniques for multi-font character recognition
| Published in: | International journal of human-computer studies, 1994-03, Vol. 40 (3), p. 403-423 |
|---|---|
| Main authors: | , , |
| Format: | Article |
| Language: | eng |
| Online access: | Full text |
| Abstract: | This paper reports the performance of several neural network models on the problem of multi-font character recognition. The networks are trained on machine-generated, upper-case English letters in selected fonts; the task is to recognize the same letters in different fonts. The results presented here were produced by back-propagation networks, radial basis networks, and a new hybrid algorithm that combines the two. These results are compared to those of the Hogg-Huberman model as well as to those of nearest-neighbor and maximum-likelihood classifiers. The effects of varying the number of hidden-layer nodes, the initial conditions, and the number of iterations in a back-propagation network were studied. The experimental results indicate that the number of nodes is an important factor in the recognition rate and that over-training is a significant problem. Different initial conditions also had a measurable effect on performance. The radial basis experiments used different numbers of centers and differing techniques for selecting the means and standard deviations. The best results were obtained with one center per training vector, with the standard deviation of every center set to the same small value. Finally, a new hybrid technique is discussed in which a radial basis network determines a starting point for a back-propagation network; the back-propagation network refines the radial basis means and standard deviations, which are then placed back into the radial basis network for another iteration. All three networks outperformed the Hogg-Huberman network as well as the maximum-likelihood classifiers. |
| ISSN: | 1071-5819, 1095-9300 |
| DOI: | 10.1006/ijhc.1994.1018 |
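The hybrid procedure summarized in the abstract — a radial basis network with one center per training vector and a shared small standard deviation, used as the starting point for back-propagation refinement — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy data, learning rate, iteration count, and sigma value are assumptions, and for brevity only the centers (not the standard deviations) are refined by the gradient step.

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian activation of each input vector against each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_hybrid(X, Y, sigma=0.3, lr=0.05, refine_steps=50):
    # Stage 1: radial basis initialization -- one center per training
    # vector, identical small standard deviation (per the abstract).
    centers = X.copy()
    Phi = rbf_features(X, centers, sigma)
    # Linear output weights solved by least squares.
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    # Stage 2: back-propagation-style refinement of the centers,
    # one gradient sweep per iteration (Phi held fixed within a sweep).
    for _ in range(refine_steps):
        Phi = rbf_features(X, centers, sigma)
        err = Phi @ W - Y                           # output-layer error
        for j in range(centers.shape[0]):
            # Gradient of the squared error w.r.t. center j.
            g = (err @ W[j]) * Phi[:, j]            # per-sample factor
            grad = (g[:, None] * (X - centers[j])).sum(axis=0) / sigma ** 2
            centers[j] -= lr * grad
        # Re-solve the output weights for the refined centers, mirroring
        # the "place back into the radial basis network" step.
        W, *_ = np.linalg.lstsq(rbf_features(X, centers, sigma), Y, rcond=None)
    return centers, W, sigma

def predict(X, centers, W, sigma):
    return rbf_features(X, centers, sigma) @ W
```

With centers placed on every training vector and a small shared sigma, the design matrix is nearly the identity, so the initial least-squares fit is already close to exact; the refinement stage then adjusts the centers without disturbing that fit.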