Gradient Frequency Modulation for Visually Explaining Video Understanding Models
In many applications, it is essential to understand why a machine learning model makes the decisions it does, but this is inhibited by the black-box nature of state-of-the-art neural networks. Because of this, increasing attention has been paid to explainability in deep learning, including in the area of video understanding.
Saved in:
Published in: | arXiv.org 2021-11 |
---|---|
Main authors: | Lin, Xinmiao ; Bao, Wentao ; Wright, Matthew ; Kong, Yu |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Lin, Xinmiao ; Bao, Wentao ; Wright, Matthew ; Kong, Yu |
description | In many applications, it is essential to understand why a machine learning model makes the decisions it does, but this is inhibited by the black-box nature of state-of-the-art neural networks. Because of this, increasing attention has been paid to explainability in deep learning, including in the area of video understanding. Due to the temporal dimension of video data, the main challenge of explaining a video action recognition model is to produce spatiotemporally consistent visual explanations, which has been ignored in the existing literature. In this paper, we propose Frequency-based Extremal Perturbation (F-EP) to explain a video understanding model's decisions. Because the explanations given by perturbation methods are noisy and non-smooth both spatially and temporally, we propose to modulate the frequencies of gradient maps from the neural network model with a Discrete Cosine Transform (DCT). We show in a range of experiments that F-EP provides more spatiotemporally consistent explanations that more faithfully represent the model's decisions compared to the existing state-of-the-art methods. |
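The abstract's core idea — attenuating noisy high-frequency components of a gradient (saliency) map with a Discrete Cosine Transform — can be sketched as follows. This is a minimal NumPy illustration of DCT low-pass filtering, not the authors' actual F-EP algorithm; the function names and the `keep` cutoff parameter are invented for illustration.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n): row k is the k-th cosine basis vector.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)          # DC row scaled so the matrix is orthonormal
    return m * np.sqrt(2.0 / n)

def lowpass_dct(grad_map, keep=4):
    """Keep only the lowest `keep` DCT frequencies along each axis,
    smoothing a noisy 2-D gradient map (assumes a square map for brevity)."""
    n = grad_map.shape[0]
    D = dct_matrix(n)
    coeffs = D @ grad_map @ D.T       # forward 2-D DCT
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0          # zero out the high-frequency coefficients
    return D.T @ (coeffs * mask) @ D  # inverse 2-D DCT (orthonormal, so D^-1 = D^T)
```

Applied per frame, this kind of frequency modulation yields spatially smoother maps; the paper's contribution is doing this in a way that also enforces temporal consistency across frames, which this sketch does not attempt.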
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2592754756 |
source | Free E-Journals |
subjects | Decisions ; Deep learning ; Discrete cosine transform ; Frequency modulation ; Machine learning ; Neural networks ; Perturbation methods ; Video data |
title | Gradient Frequency Modulation for Visually Explaining Video Understanding Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-03T14%3A26%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Gradient%20Frequency%20Modulation%20for%20Visually%20Explaining%20Video%20Understanding%20Models&rft.jtitle=arXiv.org&rft.au=Lin,%20Xinmiao&rft.date=2021-11-30&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2592754756%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2592754756&rft_id=info:pmid/&rfr_iscdi=true |