Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension
This paper explores the use of paraconsistent analysis for assessing neural networks from an explainable AI perspective. This is an early exploration paper aiming to understand whether paraconsistent analysis can be applied to understanding neural networks and whether the subject is worth developing further in future research. The answers to these two questions are affirmative. Paraconsistent analysis provides insightful prediction visualisation through a mature formal framework with proper support for reasoning. The significant potential envisioned is that paraconsistent analysis will be used for guiding neural network development projects, despite the performance issues. This paper provides two explorations. The first was a baseline experiment based on MNIST for establishing the link between paraconsistency and neural networks. The second experiment aimed to detect violence in audio files to verify whether the paraconsistent framework scales to industry-level problems. The conclusion shown by this early assessment is that further research on this subject is worthwhile and may eventually result in a significant contribution to the field.
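To make the framework concrete: in paraconsistent annotated evidential logic Eτ, a proposition is annotated with favourable evidence μ and unfavourable evidence λ in [0, 1], from which a certainty degree Dc = μ - λ and a contradiction degree Dct = μ + λ - 1 are derived. The minimal Python sketch below applies these two formulas to a classifier's softmax output; the mapping from scores to evidence (top-class score as μ, strongest competitor as λ) is an illustrative assumption, not necessarily the paper's recipe, and the function names are hypothetical.

```python
import numpy as np

def paraconsistent_degrees(mu, lam):
    """Certainty degree Dc and contradiction degree Dct of
    paraconsistent annotated evidential logic Etau, given
    favourable evidence mu and unfavourable evidence lam in [0, 1]."""
    dc = mu - lam         # Dc in [-1, 1]: +1 ~ true, -1 ~ false
    dct = mu + lam - 1.0  # Dct in [-1, 1]: +1 ~ inconsistent, -1 ~ indeterminate
    return dc, dct

def analyse_prediction(softmax):
    """Hypothetical evidence extraction (an illustrative assumption,
    not the paper's published method): treat the top-class score as
    favourable evidence and the runner-up score as unfavourable evidence."""
    ordered = np.sort(softmax)
    return paraconsistent_degrees(ordered[-1], ordered[-2])

# A confident prediction sits near the "true" corner of the lattice;
# a near-tie between two classes yields almost no certainty.
print(analyse_prediction(np.array([0.05, 0.90, 0.05])))  # Dc ~ 0.85
print(analyse_prediction(np.array([0.48, 0.47, 0.05])))  # Dc ~ 0.01
```

Plotted on the Eτ lattice, confident predictions cluster near the "true" vertex while ambiguous ones drift toward low certainty or high contradiction, which is the kind of prediction visualisation the abstract refers to.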
Published in: Electronics (Basel), 2021-11, Vol. 10 (21), p. 2660
Authors: Marcondes, Francisco S.; Durães, Dalila; Santos, Flávio; Almeida, José João; Novais, Paulo
Format: Article
Language: English
Subjects: Annotations; Artificial intelligence; Audio data; Classification; Datasets; Deep learning; Explainable artificial intelligence; Explosions; Logic; Machine learning; Neural networks
Online access: Full text
DOI: 10.3390/electronics10212660
ISSN: 2079-9292
Publisher: MDPI AG, Basel