Tokenization Matters! Degrading Large Language Models through Challenging Their Tokenization

Large Language Models (LLMs) have shown remarkable capabilities in language understanding and generation. Nonetheless, LLMs have also been observed to produce inaccurate responses to specific queries. This deficiency can be traced to the tokenization step LLMs must undergo, an inevitable limitation inherent to all LLMs. Indeed, incorrect tokenization is the critical point that hinders LLMs from understanding the input precisely, leading to unsatisfactory output. To demonstrate this flaw, we construct an adversarial dataset, named ADT (Adversarial Dataset for Tokenizer), which draws upon the vocabularies of various open-source LLMs to challenge their tokenization. ADT consists of two subsets: the manually constructed ADT-Human and the automatically generated ADT-Auto. Our empirical results reveal that ADT is highly effective in challenging the tokenization of leading LLMs, including GPT-4o, Llama-3, and Qwen2.5-max, thus degrading these LLMs' capabilities. Moreover, our method of automatic data generation has proven efficient and robust and can be applied to any open-source LLM. To the best of our knowledge, our study is the first to investigate LLMs' vulnerability in terms of challenging their token segmentation, which will shed light on subsequent research on improving LLMs' capabilities by optimizing their tokenization processes and algorithms.
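The failure mode the abstract describes can be made concrete with a minimal sketch (not the authors' released code): it probes how an open-source tokenizer segments strings whose intended word boundaries clash with entries in its vocabulary. The choice of the GPT-2 tokenizer and the probe strings below are illustrative assumptions, not items from the ADT dataset.

```python
# Illustrative sketch only: probe how a BPE tokenizer segments strings
# whose intended word boundaries conflict with its vocabulary merges.
# Assumes the Hugging Face `transformers` package; "gpt2" stands in for
# any open-source LLM tokenizer, and the probes are hypothetical examples.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Concatenated word pairs where a greedy segmentation may cross the
# intended boundary, so the model sees tokens matching neither word.
probes = ["expertsexchange", "penisland", "therapistfinder"]

for text in probes:
    # tokenize() returns the surface token pieces the model will consume.
    print(f"{text!r} -> {tokenizer.tokenize(text)}")
```

When the printed pieces straddle the boundary between the two intended words, the model receives sub-word evidence for neither reading; per the abstract, this is the kind of mis-segmentation that ADT is constructed to trigger at scale.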

Bibliographic Details
Main Authors: Wang, Dixuan, Li, Yanda, Jiang, Junyuan, Ding, Zepeng, Jiang, Guochao, Liang, Jiaqing, Yang, Deqing
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language
Online Access: Order full text
creator Wang, Dixuan
Li, Yanda
Jiang, Junyuan
Ding, Zepeng
Jiang, Guochao
Liang, Jiaqing
Yang, Deqing
description Large Language Models (LLMs) have shown remarkable capabilities in language understanding and generation. Nonetheless, LLMs have also been observed to produce inaccurate responses to specific queries. This deficiency can be traced to the tokenization step LLMs must undergo, an inevitable limitation inherent to all LLMs. Indeed, incorrect tokenization is the critical point that hinders LLMs from understanding the input precisely, leading to unsatisfactory output. To demonstrate this flaw, we construct an adversarial dataset, named ADT (Adversarial Dataset for Tokenizer), which draws upon the vocabularies of various open-source LLMs to challenge their tokenization. ADT consists of two subsets: the manually constructed ADT-Human and the automatically generated ADT-Auto. Our empirical results reveal that ADT is highly effective in challenging the tokenization of leading LLMs, including GPT-4o, Llama-3, and Qwen2.5-max, thus degrading these LLMs' capabilities. Moreover, our method of automatic data generation has proven efficient and robust and can be applied to any open-source LLM. To the best of our knowledge, our study is the first to investigate LLMs' vulnerability in terms of challenging their token segmentation, which will shed light on subsequent research on improving LLMs' capabilities by optimizing their tokenization processes and algorithms.
doi_str_mv 10.48550/arxiv.2405.17067
format Article
creationdate 2024-05-27
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2405.17067
language eng
recordid cdi_arxiv_primary_2405_17067
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
title Tokenization Matters! Degrading Large Language Models through Challenging Their Tokenization
url https://arxiv.org/abs/2405.17067