Semantic-based cross-modal knowledge association method and device, equipment and storage medium

The invention relates to the technical field of multi-modal data application and discloses a semantics-based cross-modal knowledge association method, device, equipment, and storage medium. The method comprises the following steps: acquiring cross-modal knowledge and the language description corresponding to that knowledge as a semantic cross-modal data set; constructing a cross-modal knowledge association model that comprises a semantic mask sub-model and a fusion transformation factor generation sub-model, where the semantic mask sub-model performs mask processing on the semantic cross-modal data set to generate the partial data most relevant to the language description, and the fusion transformation factor generation sub-model processes the semantic cross-modal data set to generate transformation factors; and inputting the semantic cross-modal data set into the trained cross-modal knowledge association model and outputting a cross-modal knowledge association result.
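The abstract names the components but does not give their internals. The following is a minimal sketch of one possible reading of that architecture, written in PyTorch; the class names, feature shapes, and the specific masking and fusion operations are illustrative assumptions, not the patented method or its training procedure.

```python
# Minimal sketch (assumptions only): a semantic mask sub-model that keeps the
# data most relevant to the language description, a fusion transformation
# factor sub-model, and a head that combines them into an association output.
import torch
import torch.nn as nn


class SemanticMaskSubModel(nn.Module):
    """Scores each cross-modal feature against the language description and
    soft-masks the data set down to its most relevant parts."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, modal_feats: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # modal_feats: (batch, tokens, dim); text_feat: (batch, dim)
        text = text_feat.unsqueeze(1).expand_as(modal_feats)
        relevance = torch.sigmoid(self.score(torch.cat([modal_feats, text], dim=-1)))
        return modal_feats * relevance  # keep description-relevant data (soft mask)


class FusionFactorSubModel(nn.Module):
    """Generates per-dimension transformation factors from the data set."""

    def __init__(self, dim: int):
        super().__init__()
        self.factor = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, modal_feats: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        pooled = modal_feats.mean(dim=1)  # (batch, dim)
        return self.factor(torch.cat([pooled, text_feat], dim=-1))


class CrossModalAssociationModel(nn.Module):
    """Applies the transformation factors to the masked data and emits an
    association embedding for the cross-modal input."""

    def __init__(self, dim: int):
        super().__init__()
        self.mask = SemanticMaskSubModel(dim)
        self.fusion = FusionFactorSubModel(dim)
        self.head = nn.Linear(dim, dim)

    def forward(self, modal_feats: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        masked = self.mask(modal_feats, text_feat)     # most relevant partial data
        factors = self.fusion(modal_feats, text_feat)  # transformation factors
        fused = masked.mean(dim=1) * factors           # factor-modulated fusion
        return self.head(fused)                        # association output


if __name__ == "__main__":
    model = CrossModalAssociationModel(dim=256)
    modal_like = torch.randn(2, 49, 256)  # e.g. patch features from a non-text modality
    text_like = torch.randn(2, 256)       # pooled language-description features
    print(model(modal_like, text_like).shape)  # torch.Size([2, 256])
```

In practice the three parts would be trained jointly on paired cross-modal data and language descriptions; the record does not specify a loss or training scheme, so none is shown here.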

Detailed description

Bibliographic details
Main authors: WANG WEI, HE ZHAOMING, ZHANG HEMING, BI HAI, KE LIANBAO
Format: Patent
Language: chi ; eng
Subject terms:
Online access: Order full text
creator WANG WEI
HE ZHAOMING
ZHANG HEMING
BI HAI
KE LIANBAO
description The invention relates to the technical field of multi-modal data application and discloses a semantics-based cross-modal knowledge association method, device, equipment, and storage medium. The method comprises the following steps: acquiring cross-modal knowledge and the language description corresponding to that knowledge as a semantic cross-modal data set; constructing a cross-modal knowledge association model that comprises a semantic mask sub-model and a fusion transformation factor generation sub-model, where the semantic mask sub-model performs mask processing on the semantic cross-modal data set to generate the partial data most relevant to the language description, and the fusion transformation factor generation sub-model processes the semantic cross-modal data set to generate transformation factors; and inputting the semantic cross-modal data set into the trained cross-modal knowledge association model and outputting a cross-modal knowledge association result.
format Patent
fullrecord Patent CN118153692A, published 2024-06-07; full record and full text via esp@cenet: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20240607&DB=EPODOC&CC=CN&NR=118153692A
fulltext fulltext_linktorsrc
language chi ; eng
recordid cdi_epo_espacenet_CN118153692A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTRIC DIGITAL DATA PROCESSING
PHYSICS
title Semantic-based cross-modal knowledge association method and device, equipment and storage medium
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-13T04%3A00%3A30IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=WANG%20WEI&rft.date=2024-06-07&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN118153692A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true