Systems and methods for analysis explainability
Methods and systems for providing mechanisms for presenting artificial intelligence (AI) explainability metrics associated with model-based results are provided. In embodiments, a model is applied to a source document to generate a summary. An attention score is determined for each token of a plurality of tokens of the source document. The attention score for a token indicates a level of relevance of the token to the model-based summary. The tokens are aligned to at least one word of a plurality of words included in the source document, and the attention scores of the tokens aligned to each word are combined to generate an overall attention score for each word of the source document. At least one word of the source document is displayed with an indication of the overall attention score associated with the at least one word.
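The abstract describes a pipeline: tokenize the source document, score each token's relevance to the generated summary, align sub-word tokens back to the words they came from, combine the aligned scores into one score per word, and display that score alongside the word. The sketch below illustrates the aggregation and display steps. It is not the patented implementation: the whitespace word splitting, the span-overlap alignment, the summation of aligned scores, and the toy tokens with their attention values are assumptions standing in for whatever summarization model and tokenizer would supply them in practice.

```python
# Minimal sketch of word-level attention aggregation as described in the
# abstract. Tokens, offsets, and attention scores below are toy stand-ins;
# in practice they would come from a summarization model and its tokenizer,
# which the record does not specify.

from dataclasses import dataclass


@dataclass
class ScoredToken:
    text: str          # sub-word piece produced by the tokenizer
    start: int         # character offset of the piece in the source document
    end: int
    attention: float   # relevance of this token to the generated summary


def word_spans(source: str):
    """Yield (word, start, end) character spans for whitespace-separated words."""
    pos = 0
    for word in source.split():
        start = source.index(word, pos)
        end = start + len(word)
        pos = end
        yield word, start, end


def aggregate_word_attention(source: str, tokens: list[ScoredToken]):
    """Combine token attention scores into one overall score per word.

    A token is aligned to a word when their character spans overlap; the
    aligned scores are summed (summation is an assumption -- the record only
    says the scores are "combined").
    """
    results = []
    for word, w_start, w_end in word_spans(source):
        score = sum(
            t.attention for t in tokens
            if t.start < w_end and t.end > w_start  # span-overlap test
        )
        results.append((word, score))
    return results


def render(scored_words, levels=" .:*#"):
    """Display each word with a coarse indication of its overall score."""
    top = max(score for _, score in scored_words) or 1.0
    for word, score in scored_words:
        mark = levels[min(int(score / top * (len(levels) - 1)), len(levels) - 1)]
        print(f"{mark} {word:<12} {score:.2f}")


if __name__ == "__main__":
    doc = "The court granted summary judgment for the defendant"
    # Hypothetical sub-word tokens with attention scores from a summarizer.
    toks = [
        ScoredToken("The", 0, 3, 0.02),
        ScoredToken("court", 4, 9, 0.30),
        ScoredToken("grant", 10, 15, 0.25),
        ScoredToken("##ed", 15, 17, 0.10),
        ScoredToken("summary", 18, 25, 0.05),
        ScoredToken("judg", 26, 30, 0.20),
        ScoredToken("##ment", 30, 34, 0.15),
        ScoredToken("for", 35, 38, 0.01),
        ScoredToken("the", 39, 42, 0.01),
        ScoredToken("defendant", 43, 52, 0.40),
    ]
    render(aggregate_word_attention(doc, toks))
```

Running the sketch prints each word with a coarse intensity mark and its combined score, one possible rendering of the "indication of the overall attention score" the abstract refers to; a production system might instead map the score to a highlight color in the document view.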
creator | HRISTOZOVA, Nina Stamenova; MULDER, Andrew Timothy; SKYLAKI, Stavroula; NORKUTE, Milda; HERGER, Nadja; GIOFRÉ, Daniele; MICHALAK, Leszek |
description | Methods and systems for providing mechanisms for presenting artificial intelligence (AI) explainability metrics associated with model-based results are provided. In embodiments, a model is applied to a source document to generate a summary. An attention score is determined for each token of a plurality of tokens of the source document. The attention score for a token indicates a level of relevance of the token to the model-based summary. The tokens are aligned to at least one word of a plurality of words included in the source document, and the attention scores of the tokens aligned to each word are combined to generate an overall attention score for each word of the source document. At least one word of the source document is displayed with an indication of the overall attention score associated with the at least one word. |
format | Patent |
creationdate | 2023-03-16 |
linktorsrc | https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20230316&DB=EPODOC&CC=AU&NR=2021346958A1 |
fulltext | fulltext_linktorsrc |
language | eng |
recordid | cdi_epo_espacenet_AU2021346958A1 |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS |
title | Systems and methods for analysis explainability |