Analyzing features learned for Offline Signature Verification using Deep CNNs

Research on Offline Handwritten Signature Verification has explored a large variety of handcrafted feature extractors, ranging from graphology and texture descriptors to interest points. In spite of advancements in the last decades, the performance of such systems is still far from optimal when the systems are tested against skilled forgeries - signature forgeries that target a particular individual.

Full description

Saved in:
Bibliographic Details
Published in: arXiv.org 2016-08
Main Authors: Hafemann, Luiz G, Sabourin, Robert, Oliveira, Luiz S
Format: Article
Language: English
Subjects:
Online Access: Full text
container_title arXiv.org
creator Hafemann, Luiz G; Sabourin, Robert; Oliveira, Luiz S
description Research on Offline Handwritten Signature Verification has explored a large variety of handcrafted feature extractors, ranging from graphology and texture descriptors to interest points. In spite of advancements in the last decades, the performance of such systems is still far from optimal when the systems are tested against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push the performance of this method further, exploring a range of architectures, and obtain a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset for the task. On the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% for the best result published in the literature (which used a combination of multiple classifiers). We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is poor, which is the case for slowly-traced forgeries.
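The Equal Error Rate (EER) cited above (2.74% on GPDS-160) is the verification error at the operating point where false rejections of genuine signatures equal false acceptances of forgeries. The Python sketch below is only an illustration of that metric, not the authors' implementation; the score arrays and the simple threshold sweep are assumptions made for this example.

import numpy as np

def equal_error_rate(genuine_scores, forgery_scores):
    """Error rate at the threshold where the false rejection rate (FRR,
    genuine signatures rejected) equals the false acceptance rate (FAR,
    forgeries accepted)."""
    genuine = np.asarray(genuine_scores, dtype=float)
    forgery = np.asarray(forgery_scores, dtype=float)
    best_gap, eer = np.inf, 1.0
    # Try every observed score as a candidate decision threshold.
    for t in np.unique(np.concatenate([genuine, forgery])):
        frr = np.mean(genuine < t)   # genuine signatures rejected at threshold t
        far = np.mean(forgery >= t)  # forgeries accepted at threshold t
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return eer

# Hypothetical, synthetic scores (higher = more likely genuine), for illustration only.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 500)
skilled_forgeries = rng.normal(0.5, 0.15, 500)
print(f"EER on synthetic scores: {equal_error_rate(genuine, skilled_forgeries):.2%}")

In the paper's setting such scores would come from classifiers operating on the CNN-learned features; here they are random numbers chosen only to exercise the metric.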
doi 10.48550/arxiv.1607.04573
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2016-08
issn 2331-8422
language eng
recordid cdi_arxiv_primary_1607_04573
source arXiv.org; Free E-Journals
subjects Artificial neural networks
Classifiers
Computer Science - Computer Vision and Pattern Recognition
Datasets
Feature extraction
Graphology
Handwriting
Handwritten signature verification
Performance enhancement
Signatures
Statistics - Machine Learning
title Analyzing features learned for Offline Signature Verification using Deep CNNs
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-21T04%3A02%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Analyzing%20features%20learned%20for%20Offline%20Signature%20Verification%20using%20Deep%20CNNs&rft.jtitle=arXiv.org&rft.au=Hafemann,%20Luiz%20G&rft.date=2016-08-26&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.1607.04573&rft_dat=%3Cproquest_arxiv%3E2075311097%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2075311097&rft_id=info:pmid/&rfr_iscdi=true