Extending the Morphological Hit-or-Miss Transform to Deep Neural Networks

While most deep learning architectures are built on convolution, alternative foundations such as morphology are being explored for purposes such as interpretability and its connection to the analysis and processing of geometric structures. The morphological hit-or-miss operation has the advantage that it considers both foreground and background information when evaluating the target shape in an image. In this article, we identify limitations in the existing hit-or-miss neural definitions and formulate an optimization problem to learn the transform relative to deeper architectures. To this end, we model the semantically important condition that the intersection of the hit and miss structuring elements (SEs) should be empty and present a way to express Don't Care (DNC), which is important for denoting regions of an SE that are not relevant to detecting a target pattern. Our analysis shows that convolution, in fact, acts like a hit-or-miss transform through semantic interpretation of its filter differences. On these premises, we introduce an extension that outperforms conventional convolution on benchmark data. Quantitative experiments on synthetic and benchmark data show that the direct-encoding hit-or-miss transform provides better interpretability, learning shapes consistent with objects, whereas our morphologically inspired generalized convolution yields higher classification accuracy. Finally, qualitative hit and miss filter visualizations are provided for a single morphological layer.

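The abstract's key ingredients — hit and miss structuring elements (SEs) whose supports must not intersect, Don't Care (DNC) positions, and the reading of convolution as a hit/miss filter difference — can be illustrated with a short sketch. The NumPy code below is a minimal illustration under assumptions of this summary, not the authors' implementation: the function names, the erosion-minus-dilation response, and the treatment of zero weights as positions outside an SE's support are choices made here for illustration only.

```python
import numpy as np


def grayscale_hit_or_miss(image, hit_se, miss_se, dnc_mask=None):
    """Toy grayscale hit-or-miss response map (illustrative sketch).

    image    : 2-D float array.
    hit_se   : 2-D array encoding the foreground (hit) structuring element.
    miss_se  : 2-D array encoding the background (miss) structuring element.
    dnc_mask : optional boolean array, True at Don't Care (DNC) positions
               that should not influence the response.

    Response = grayscale erosion by the hit SE minus a dilation-like term
    for the miss SE; large positive values suggest the target shape.
    """
    kh, kw = hit_se.shape
    if dnc_mask is None:
        dnc_mask = np.zeros((kh, kw), dtype=bool)
    active = ~dnc_mask

    # The abstract's constraint: hit and miss supports must not overlap.
    # A zero weight is treated as "outside the support" (an assumption).
    assert not np.any((hit_se > 0) & (miss_se > 0) & active), \
        "hit and miss SEs must have an empty intersection"

    # Pad so every pixel gets a full kh-by-kw neighborhood.
    pt, pb = kh // 2, kh - 1 - kh // 2
    pl, pr = kw // 2, kw - 1 - kw // 2
    padded = np.pad(image.astype(float), ((pt, pb), (pl, pr)), mode="edge")
    out = np.zeros(image.shape, dtype=float)

    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + kh, j:j + kw]
            hit_support = active & (hit_se > 0)
            miss_support = active & (miss_se > 0)
            # Erosion by the hit SE: how well the foreground fits.
            ero = (patch - hit_se)[hit_support].min() if hit_support.any() else 0.0
            # Miss term: how strongly the background intrudes.
            dil = (patch + miss_se)[miss_support].max() if miss_support.any() else 0.0
            out[i, j] = ero - dil
    return out


def conv_filter_as_hit_and_miss(w):
    """Split a convolution filter into disjoint hit/miss parts.

    Positive weights act as foreground (hit) evidence and negative weights
    as background (miss) evidence — one way to read the abstract's remark
    that convolution behaves like a hit-or-miss transform through its
    filter differences.
    """
    hit_se = np.maximum(w, 0.0)
    miss_se = np.maximum(-w, 0.0)
    return hit_se, miss_se
```

As a usage example, calling grayscale_hit_or_miss with a small bright plateau as hit_se and a surrounding ring as miss_se yields a response map whose peaks mark locations where the foreground matches the hit SE while the immediate background matches the miss SE; positions flagged in dnc_mask contribute nothing, mirroring the role the abstract assigns to DNC.
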
Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2021-11, Vol. 32 (11), p. 4826-4838
Main Authors: Islam, Muhammad Aminul; Murray, Bryce; Buck, Andrew; Anderson, Derek T.; Scott, Grant J.; Popescu, Mihail; Keller, James
Format: Article
Language: English
Subjects: see list below
Online Access: Order full text
DOI: 10.1109/TNNLS.2020.3025723
PMID: 33021943
CODEN: ITNNAL
Publisher: IEEE (United States)
ISSN: 2162-237X
EISSN: 2162-2388
Record ID: cdi_ieee_primary_9214880
Source: IEEE Electronic Library (IEL)
Subjects:
Algorithms
Artificial neural networks
Benchmarks
Convolution
convolutional neural network (CNN)
Convolutional neural networks
Deep learning
Deep Learning - trends
Gray-scale
hit-or-miss transform
Humans
Machine learning
Morphology
Neural networks
Neural Networks, Computer
Optimization
Pattern analysis
Pattern Recognition, Automated - methods
Pattern Recognition, Automated - trends
Target detection
Transforms