A Supervised Word Alignment Method based on Cross-Language Span Prediction using Multilingual BERT
creator | Nagata, Masaaki ; Chousa, Katsuki ; Nishino, Masaaki |
description | We present a novel supervised word alignment method based on cross-language span prediction. We first formalize the word alignment problem as a collection of independent predictions from a token in the source sentence to a span in the target sentence. Since this is equivalent to a SQuAD v2.0-style question answering task, we solve it using multilingual BERT fine-tuned on manually created gold word alignment data. We greatly improve word alignment accuracy by adding the context of the token to the question. In experiments on five word alignment datasets covering Chinese, Japanese, German, Romanian, French, and English, the proposed method significantly outperforms previous supervised and unsupervised word alignment methods without using any bitexts for pretraining. For example, it achieves an F1 score of 86.7 on the Chinese-English data, 13.3 points higher than the previous state-of-the-art supervised method. |
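The reduction described above can be sketched as follows. This is an illustrative sketch, not the authors' code: it shows how one word-alignment decision might be converted into a SQuAD v2.0-style question-answering example, with the source token kept in its full sentence context. The boundary marker, function name, and dictionary fields are assumptions for illustration.

```python
# Hypothetical sketch of the cross-language span-prediction formulation:
# one QA example per source token, asking which target-sentence span
# (if any) it aligns to. Marker symbol and field names are assumptions.

def make_qa_example(src_tokens, tgt_sentence, i, marker="¶"):
    """Build a SQuAD-style example asking which span of tgt_sentence
    aligns to src_tokens[i].

    The token of interest is surrounded by marker symbols but left
    inside its full sentence, so the model sees both the token and its
    context -- the addition the abstract credits with the accuracy gain.
    """
    question = " ".join(
        f"{marker} {tok} {marker}" if j == i else tok
        for j, tok in enumerate(src_tokens)
    )
    return {
        "question": question,     # marked source sentence
        "context": tgt_sentence,  # answer, if any, is a span of this
        # SQuAD v2.0 permits unanswerable questions, which naturally
        # models source tokens with no target-side alignment (null links).
    }

example = make_qa_example(["He", "plays", "football"], "Er spielt Fußball", 1)
print(example["question"])  # He ¶ plays ¶ football
```

Because each source token yields an independent example, a sentence pair of n source tokens produces n such questions, and alignments in both directions can be obtained by swapping the roles of the two sentences.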
doi_str_mv | 10.48550/arxiv.2004.14516 |
format | Article |
creationdate | 2020-04-29 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2004.14516 |
language | eng |
recordid | cdi_arxiv_primary_2004_14516 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | A Supervised Word Alignment Method based on Cross-Language Span Prediction using Multilingual BERT |