Method, device and equipment for training text graph model and medium

The invention provides a text graph (text-to-image) model training method and device, equipment and a medium. The method comprises the following steps: acquiring a plurality of fine-grained labels of a to-be-processed image; inputting the to-be-processed image and the fine-grained labels into a first pre-training model to obtain an image description together with REC (referring expression comprehension) and RES (referring expression segmentation) results; inputting the fine-grained labels, the image description, the REC and the RES into a second pre-training model to obtain a description text; and generating a training sample from the description text and the to-be-processed image, the training sample being used to train the text graph model.
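The two-stage pipeline in the abstract (image + fine-grained labels → first pre-training model → image description, REC, RES → second pre-training model → description text → training pair) can be sketched as below. The patent does not disclose the actual pre-training models, so every function here is a hypothetical stand-in that only mimics the data flow:

```python
# Minimal sketch of the training-sample pipeline from the abstract.
# All model functions are toy stand-ins, not the patented models.

def first_pretraining_model(image, labels):
    """Stand-in for the first pre-training model: returns an image
    description, a REC result (label -> bounding box) and a RES
    result (label -> segmentation-mask id)."""
    description = "an image containing " + ", ".join(labels)
    rec = {label: (0, 0, 10, 10) for label in labels}          # dummy boxes
    res = {label: f"mask_{i}" for i, label in enumerate(labels)}
    return description, rec, res

def second_pretraining_model(labels, description, rec, res):
    """Stand-in for the second pre-training model: fuses labels,
    description, REC and RES into a single description text."""
    parts = [description]
    for label in labels:
        parts.append(f"{label} at {rec[label]} with segment {res[label]}")
    return "; ".join(parts)

def build_training_sample(image, labels):
    """End-to-end pipeline: produces a (text, image) training pair
    for the text-to-image model."""
    description, rec, res = first_pretraining_model(image, labels)
    text = second_pretraining_model(labels, description, rec, res)
    return {"text": text, "image": image}

sample = build_training_sample("img_001.png", ["cat", "sofa"])
print(sample["text"])
```

The point of the fusion step is the one the abstract makes: the description gives image-level context, REC boxes give local (object-level) grounding, and RES masks give pixel-level grounding, so the generated caption encodes spatial relationships that a plain caption would miss.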

Detailed description

Bibliographic details
Main authors: CHEN YUFEI, SHANG WENXIANG, XU LIWU, LIN RAN, MENG CHUANG
Format: Patent
Language: Chinese; English
creator CHEN YUFEI
SHANG WENXIANG
XU LIWU
LIN RAN
MENG CHUANG
description The invention provides a text graph model training method and device, equipment and a medium. The method comprises the following steps: acquiring a plurality of fine-grained labels of a to-be-processed image; inputting the to-be-processed image and the fine-grained labels into a first pre-training model to obtain an image description together with REC (referring expression comprehension) and RES (referring expression segmentation) results; inputting the fine-grained labels, the image description, the REC and the RES into a second pre-training model to obtain a description text; and generating a training sample from the description text and the to-be-processed image, the training sample being used to train the text graph model. By combining the fine-grained labels, the image description, the REC and the RES, the method gives the model image-level, local and pixel-level understanding, improving its grasp of the spatial relationships between objects and the relationships between the ob
format Patent
fulltext fulltext_linktorsrc
language chi ; eng
recordid cdi_epo_espacenet_CN118552761A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
PHYSICS
title Method, device and equipment for training text graph model and medium
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T02%3A18%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=CHEN%20YUFEI&rft.date=2024-08-27&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN118552761A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true