Text generation method, device and system in low-resource scene
The invention provides a text generation method, device, and system for low-resource scenarios. The method comprises the steps: 1, inputting a small number of supervised training samples into a supervised network and a large number of unsupervised training samples into an unsupervised network,...
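Step 1 of the abstract copies each unsupervised document and applies dropout to the copies' embedded vectors, yielding two noisy views whose representations are pushed to agree. A minimal NumPy sketch of that idea, assuming inverted dropout and a mean-squared-error consistency objective (the patent text specifies neither; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p, rng):
    # Inverted dropout: zero each entry with probability p, rescale survivors.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def consistency_loss(e1, e2):
    # Mean squared distance between the two dropout views of one document.
    return float(np.mean((e1 - e2) ** 2))

doc_embedding = rng.normal(size=(4, 8))    # one unsupervised document: 4 tokens x 8 dims
view_a = dropout(doc_embedding, 0.1, rng)  # first copy, one dropout mask
view_b = dropout(doc_embedding, 0.1, rng)  # second copy, an independent mask
loss = consistency_loss(view_a, view_b)    # consistency target to minimize
```

During training such a loss would be minimized alongside the supervised loss, encouraging representations that are stable under dropout noise.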
Saved in:
Main authors: | DENG TING, JIANG WEIFENG, LIU JUNNAN, TAI ZHENYING, LI JIANXIN, MAO QIANREN |
---|---|
Format: | Patent |
Language: | chi ; eng |
Subjects: | |
Online access: | Order full text
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | DENG TING ; JIANG WEIFENG ; LIU JUNNAN ; TAI ZHENYING ; LI JIANXIN ; MAO QIANREN
description | The invention provides a text generation method, device, and system for low-resource scenarios. The method comprises the steps: 1, inputting a small number of supervised training samples into a supervised network and a large number of unsupervised training samples into an unsupervised network, copying each unsupervised document to obtain two copies, and applying dropout to the embedded vectors of the copies, so as to obtain two groups of embedded vectors; 2, integrating small adapter neural networks in parallel with the large pre-trained text generation network, forming a pre-training learning component based on adapter fine-tuning; and 3, performing consistency learning on the unsupervised network using the adapter-based fine-tuning pre-training learning component for the supervised and unsupervised networks, training and optimizing the text generation model in combination with supervised learning of the supervised network, and performing prediction by using th |
format | Patent |
fullrecord | Patent CN114611472A, published 2022-06-10, in Chinese and English; open access (free_for_read); full text via esp@cenet: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20220610&DB=EPODOC&CC=CN&NR=114611472A |
fulltext | fulltext_linktorsrc |
identifier | |
ispartof | |
issn | |
language | chi ; eng |
recordid | cdi_epo_espacenet_CN114611472A |
source | esp@cenet |
subjects | CALCULATING ; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS ; COMPUTING ; COUNTING ; ELECTRIC DIGITAL DATA PROCESSING ; PHYSICS
title | Text generation method, device and system in low-resource scene |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T20%3A12%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=DENG%20TING&rft.date=2022-06-10&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN114611472A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
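Step 2 of the abstract describes small neural networks integrated in parallel with a large pre-trained network, i.e. adapter-based fine-tuning: tiny trainable bottleneck modules run alongside the frozen pre-trained layers, so only a small fraction of parameters is updated. A minimal NumPy sketch under that assumption (the ReLU bottleneck and residual addition follow common adapter designs and are not details taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)

class Adapter:
    """Small bottleneck network: down-project, nonlinearity, up-project."""
    def __init__(self, dim, bottleneck, rng):
        self.W_down = rng.normal(scale=0.02, size=(dim, bottleneck))
        self.W_up = rng.normal(scale=0.02, size=(bottleneck, dim))

    def __call__(self, h):
        return np.maximum(h @ self.W_down, 0.0) @ self.W_up  # ReLU bottleneck

def frozen_pretrained_layer(h):
    # Stand-in for one layer of the large pre-trained model; its weights stay fixed.
    return np.tanh(h)

def layer_with_adapter(h, adapter):
    # The adapter runs in parallel with the frozen layer and its output is added;
    # only the adapter's few parameters would be updated during fine-tuning.
    return frozen_pretrained_layer(h) + adapter(h)

h = rng.normal(size=(2, 16))  # batch of 2 hidden states, dim 16
out = layer_with_adapter(h, Adapter(16, 4, rng))
```

Because the bottleneck is small (here 16 → 4 → 16), adapter fine-tuning trains orders of magnitude fewer parameters than full fine-tuning, which is what makes it attractive in the low-resource setting the patent targets.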