Reinforced Rewards Framework for Text Style Transfer

Style transfer deals with algorithms that transfer the stylistic properties of a piece of text to those of another while ensuring that the core content is preserved. There has been a lot of interest in text style transfer due to its wide application to tailored text generation. Existing works evaluate style transfer models on content preservation and transfer strength. In this work, we propose a reinforcement-learning-based framework that directly rewards the model on these target metrics, yielding a better transfer of the target style. We show the improved performance of our proposed framework through automatic and human evaluation on three independent tasks, wherein we transfer the style of text from formal to informal, from high excitement to low excitement, and from modern English to Shakespearean English, and vice versa in all three cases. Improved performance over existing state-of-the-art frameworks indicates the viability of the approach.
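The abstract describes rewarding the generator directly on the evaluation metrics (content preservation and transfer strength). As a minimal illustrative sketch only, and not the authors' implementation, the snippet below shows how such metric-based rewards could drive a REINFORCE-style policy-gradient update; the scorer functions, weight values, and all names are assumptions introduced for illustration.

    # Minimal sketch (assumption-laden, not the paper's code): fine-tune a
    # sequence generator with REINFORCE, where the scalar reward blends a
    # content-preservation score and a transfer-strength score, mirroring
    # the evaluation metrics named in the abstract.
    import torch

    def combined_reward(source, generated, content_score, style_score,
                        w_content=0.5, w_style=0.5):
        """Blend the two metrics into one scalar reward.

        content_score(source, generated) -> float in [0, 1], e.g. a
        BLEU-like overlap measure; style_score(generated) -> float in
        [0, 1], e.g. a target-style classifier's confidence. Both scorers
        and the 0.5/0.5 weights are hypothetical placeholders.
        """
        return (w_content * content_score(source, generated)
                + w_style * style_score(generated))

    def reinforce_loss(log_probs, reward, baseline=0.0):
        """Policy-gradient loss for one sampled output sequence.

        log_probs: 1-D torch tensor of token log-probabilities of the
                   sampled output under the current generator.
        reward:    scalar from combined_reward.
        baseline:  e.g. the reward of a greedily decoded output, for
                   variance reduction.
        """
        advantage = reward - baseline
        return -advantage * log_probs.sum()  # ascend on expected reward

Weighting the two scores lets one trade off meaning retention against style strength; subtracting a baseline reward is a standard variance-reduction choice in policy-gradient training.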

Bibliographic Details
Main Authors: Sancheti, Abhilasha; Krishna, Kundan; Srinivasan, Balaji Vasan; Natarajan, Anandhavelu
Format: Article
Language: eng
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning
DOI: 10.48550/arxiv.2005.05256
Published: 2020-05-11
Online Access: https://arxiv.org/abs/2005.05256