ATTRIBUTING ASPECTS OF GENERATED VISUAL CONTENTS TO TRAINING EXAMPLES

Systems, methods and non-transitory computer readable media for attributing aspects of generated visual contents to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be a result of training a machine learning model using a plurality of training examples. Properties of an aspect of the first visual content and properties of visual contents associated with the plurality of training examples may be used to attribute the aspect of the first visual content to a subgroup of the plurality of training examples. For each source of the sources associated with the visual contents associated with the training examples of the subgroup, a data-record associated with the source may be updated based on the attribution of the aspect of the first visual content.
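The attribution flow described in the abstract (compare properties of an aspect of the generated content against properties of the training examples, select a subgroup, then update a data-record per source) can be sketched in Python. This is a minimal illustrative sketch, not the patented implementation: the property vectors, source names, cosine-similarity scoring, and top-k subgroup selection are all assumptions chosen for the example.

```python
import math
from collections import defaultdict

def cosine(a, b):
    # Cosine similarity between two property vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute_aspect(aspect_props, training_examples, top_k=2):
    # Rank training examples by similarity of their visual-content
    # properties to the aspect's properties; the top-k form the subgroup
    # to which the aspect is attributed.
    ranked = sorted(
        training_examples,
        key=lambda ex: cosine(aspect_props, ex["properties"]),
        reverse=True,
    )
    return ranked[:top_k]

def update_source_records(subgroup, records):
    # For each source associated with a training example in the subgroup,
    # update that source's data-record (here: a simple attribution count).
    for ex in subgroup:
        records[ex["source"]] += 1
    return records

# Hypothetical training examples with property vectors and sources.
examples = [
    {"source": "photographer_a", "properties": [0.9, 0.1, 0.0]},
    {"source": "stock_site_b", "properties": [0.8, 0.2, 0.1]},
    {"source": "artist_c", "properties": [0.0, 0.1, 0.9]},
]
records = defaultdict(int)
subgroup = attribute_aspect([1.0, 0.0, 0.0], examples, top_k=2)
update_source_records(subgroup, records)
```

In a real system the property vectors would likely come from a learned embedding of the visual contents, and the per-source data-record could track usage counts, royalties, or provenance metadata rather than a bare counter.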

Detailed Description

Saved in:
Bibliographic details
Main authors: SARID, Nimrod, Horesh-Yaniv, Vered, FEINSTEIN, Michael, ADATO, Yair, MOKADY, Ron, GUTFLAISH, Eyal
Format: Patent
Language: eng
Subjects:
Online access: Order full text
description Systems, methods and non-transitory computer readable media for attributing aspects of generated visual contents to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be a result of training a machine learning model using a plurality of training examples. Properties of an aspect of the first visual content and properties of visual contents associated with the plurality of training examples may be used to attribute the aspect of the first visual content to a subgroup of the plurality of training examples. For each source of the sources associated with the visual contents associated with the training examples of the subgroup, a data-record associated with the source may be updated based on the attribution of the aspect of the first visual content.
recordid cdi_epo_espacenet_US2024273865A1
source esp@cenet
subjects CALCULATING
COMPUTING
COUNTING
PHYSICS
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T08%3A17%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=SARID,%20Nimrod&rft.date=2024-08-15&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2024273865A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true