Wonderful Matrices: More Efficient and Effective Architecture for Language Modeling Tasks

We prove the availability of inner-product-form position encoding in the state space dual algorithm and study the effectiveness of different position embeddings in hybrid quadratic causal self-attention and state space dual algorithms. We propose inner function attention with a dynamic mask, which can improve the expressiveness of the attention algorithm and prevent sequence noise from significantly degrading the accuracy of the attention scores. We also design a cross-domain mixture of experts, which can improve the granularity of the sparsely activated feed-forward network while maintaining the efficiency of parameter utilization and retrieval. The combination of these methods constitutes our foundation model architecture: Wonderful Matrices. We conduct experiments on language modeling tasks and find that Wonderful Matrices is more efficient and effective at handling complex language tasks.
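
The record does not reproduce the paper's exact formulation of inner function attention with a dynamic mask, so the following is only a minimal sketch of the general idea under stated assumptions: a standard causal attention block whose scores receive an additional learned, content-dependent additive mask intended to down-weight noisy key positions. The module name `DynamicMaskAttention` and the `mask_gate` projection are hypothetical illustrations, not the paper's definitions.

```python
# Minimal sketch: causal attention with a learned, content-dependent additive mask.
# The mask computation (a per-head gate over key positions) is an assumption for
# illustration; it is not the paper's "inner function attention" formulation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMaskAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Hypothetical gate scoring how attendable each position is per head;
        # low-scoring ("noisy") positions are softly suppressed.
        self.mask_gate = nn.Linear(d_model, n_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)  # (b, h, t, t)

        # Causal mask: position i may only attend to positions <= i.
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(causal, float("-inf"))

        # Dynamic mask: an additive, content-dependent bias per key position (<= 0).
        dyn = F.logsigmoid(self.mask_gate(x))                 # (b, t, h)
        scores = scores + dyn.permute(0, 2, 1).unsqueeze(2)   # broadcast over queries

        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(out)
```

For example, `DynamicMaskAttention(256, 4)(torch.randn(2, 32, 256))` returns a tensor of shape (2, 32, 256).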

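Likewise, the cross-domain mixture of experts is described in this record only by its goals (finer expert granularity with efficient parameter utilization and retrieval). The sketch below assumes one common realization of that idea: a small set of always-active shared experts combined with many sparsely routed private experts. The class names, expert counts, and top-k routing are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of a mixture-of-experts feed-forward block with shared
# ("cross-domain") experts applied to every token plus sparsely routed
# private experts. This is an assumed realization, not the paper's layout.
import torch
import torch.nn as nn

class ExpertMLP(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class CrossDomainMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=1024, n_shared=1, n_private=8, top_k=2):
        super().__init__()
        # Shared experts: dense path applied to every token.
        self.shared = nn.ModuleList([ExpertMLP(d_model, d_hidden) for _ in range(n_shared)])
        # Private experts: sparsely activated per token via a learned router.
        self.private = nn.ModuleList([ExpertMLP(d_model, d_hidden) for _ in range(n_private)])
        self.router = nn.Linear(d_model, n_private)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        flat = x.reshape(-1, d)                              # (b*t, d)
        out = sum(e(flat) for e in self.shared)              # dense cross-domain path

        gate = self.router(flat).softmax(dim=-1)             # (b*t, n_private)
        weights, idx = gate.topk(self.top_k, dim=-1)         # top-k sparse routing
        for k in range(self.top_k):
            for e_id, expert in enumerate(self.private):
                sel = idx[:, k] == e_id                      # tokens routed to this expert
                if sel.any():
                    out[sel] = out[sel] + weights[sel, k].unsqueeze(-1) * expert(flat[sel])
        return out.reshape(b, t, d)
```

For example, `CrossDomainMoE()(torch.randn(2, 16, 512))` returns a tensor of shape (2, 16, 512); only `top_k` private experts contribute to each token while the shared experts always run.
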
Bibliographic Details
Main Authors: Shi, Jingze; Wu, Bingheng; He, Lu; Jiang, Luchang
Format: Article
Language: English
Published: 2024-07-23 (arXiv)
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
DOI: 10.48550/arxiv.2407.16958
Rights: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0)
Source: arXiv.org
Online Access: https://arxiv.org/abs/2407.16958