Is word order considered by foundation models? A comparative task-oriented analysis

Bibliographic details
Published in: Expert systems with applications, 2024-05, Vol. 241, Article 122700
Authors: Zhao, Qinghua; Li, Jiaang; Liu, Junfeng; Kang, Zhongfeng; Zhou, Zenghui
Format: Article
Language: English
Description
Abstract: Word order, a linguistic concept essential for conveying accurate meaning, seems largely unnecessary to language models according to existing work. Contrary to this prevailing notion, our paper examines the impact of word order using carefully selected tasks that demand distinct abilities. Using three large language model families (ChatGPT, Claude, LLaMA), three controllable word order perturbation strategies, one novel perturbation qualification metric, four well-chosen tasks, and three languages, we conduct experiments to shed light on this topic. Empirical findings demonstrate that foundation models do take word order into consideration during generation. Moreover, tasks emphasizing reasoning abilities rely more heavily on word order than those based primarily on world knowledge.
•Word order is reexamined with 4 tasks, 3 perturbation strategies, 3 languages, and 5 models.
•The tested datasets include TruthfulQA, MGSM, XWinoGrande, and WiQueen.
•The word order perturbation strategies are Random, Rotate, and Adjacent.
•English, Chinese, and French datasets are tested on ChatGPT, Claude, and LLaMA.
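The abstract names three perturbation strategies (Random, Rotate, Adjacent) but does not spell out their exact definitions. The sketch below illustrates one common interpretation of each: a uniform shuffle, a cyclic shift, and pairwise swaps of adjacent words. The function names and the precise semantics are assumptions for illustration, not the paper's implementation.

```python
import random

def perturb_random(words, seed=0):
    # Random: uniformly shuffle all words (assumed semantics).
    rng = random.Random(seed)
    shuffled = words[:]
    rng.shuffle(shuffled)
    return shuffled

def perturb_rotate(words, k=1):
    # Rotate: cyclically shift the sequence left by k positions (assumed semantics).
    k %= max(len(words), 1)
    return words[k:] + words[:k]

def perturb_adjacent(words):
    # Adjacent: swap each neighbouring pair of words (assumed semantics).
    out = words[:]
    for i in range(0, len(out) - 1, 2):
        out[i], out[i + 1] = out[i + 1], out[i]
    return out

sentence = "word order matters for meaning".split()
print(perturb_rotate(sentence, 2))   # ['matters', 'for', 'meaning', 'word', 'order']
print(perturb_adjacent(sentence))    # ['order', 'word', 'for', 'matters', 'meaning']
```

All three perturbations preserve the multiset of words and change only their positions, which is what lets the paper attribute any change in task performance to word order alone.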
ISSN: 0957-4174; 1873-6793
DOI: 10.1016/j.eswa.2023.122700