Deep interpretable architecture for plant diseases classification
creator | Brahimi, Mohammed ; Mahmoudi, Said ; Boukhalfa, Kamel ; Moussaoui, Abdelouhab |
description | Recently, many works have been inspired by the success of deep learning in
computer vision for plant disease classification. Unfortunately, these
end-to-end deep classifiers lack transparency, which can limit their adoption in
practice. In this paper, we propose a new trainable visualization method for
plant disease classification based on a Convolutional Neural Network (CNN)
architecture composed of two deep classifiers, the first named Teacher
and the second Student. This architecture leverages multitask learning
to train the Teacher and the Student jointly. The representation
communicated between the Teacher and the Student is then used as a proxy to
visualize the image regions most important for classification. This new
architecture produces sharper visualizations than existing methods in the plant
disease context. All experiments are conducted on the PlantVillage dataset, which
contains 54,306 plant images. |
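The Teacher/Student setup described in the abstract can be illustrated with a tiny NumPy sketch: two classification heads share one representation, are trained under a joint multitask loss, and the shared (communicated) representation is the object later inspected for visualization. All dimensions, weights, and variable names below are illustrative assumptions, not the paper's actual CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true labels
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# A batch of 4 "leaf images" flattened to 16-d features, 3 disease classes.
x = rng.normal(size=(4, 16))
y = np.array([0, 2, 1, 2])

W_enc = rng.normal(size=(16, 8)) * 0.1  # hypothetical Teacher encoder
W_t = rng.normal(size=(8, 3)) * 0.1     # Teacher classification head
W_s = rng.normal(size=(8, 3)) * 0.1     # Student classification head

# The "communicated representation": produced by the Teacher, consumed by
# the Student, and used as the proxy for visualizing important regions.
rep = np.tanh(x @ W_enc)

teacher_probs = softmax(rep @ W_t)
student_probs = softmax(rep @ W_s)

# Joint multitask objective: both classifiers are optimized together,
# so the shared representation must serve both tasks.
joint_loss = cross_entropy(teacher_probs, y) + cross_entropy(student_probs, y)
print(float(joint_loss))
```

In the paper's actual architecture the shared tensor is spatial (a feature map over the image), which is what makes it usable as a localization signal; the flat 8-d vector here only mirrors the training structure.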
doi_str_mv | 10.48550/arxiv.1905.13523 |
format | Article |
creationdate | 2019-05-31 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1905.13523 |
language | eng |
recordid | cdi_arxiv_primary_1905_13523 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | Deep interpretable architecture for plant diseases classification |
url | https://arxiv.org/abs/1905.13523 |