Falcon2-11B Technical Report

We introduce Falcon2-11B, a foundation model trained on over five trillion tokens, and its multimodal counterpart, Falcon2-11B-vlm, a vision-to-text model. We report our findings from the training of Falcon2-11B, which follows a multi-stage approach in which the early stages are distinguished by their context length, followed by a final stage using a curated, high-quality dataset. Additionally, we report the effect of doubling the batch size mid-training and how training loss spikes are affected by the learning rate. The downstream performance of the foundation model is evaluated on established benchmarks, including multilingual and code datasets. The foundation model shows strong generalization across all tasks, making it suitable for downstream fine-tuning use cases. For the vision-language model, we report performance on several benchmarks and show that our model achieves a higher average score than open-source models of similar size. The model weights and code of both Falcon2-11B and Falcon2-11B-vlm are made available under a permissive license.


Bibliographic Details
Main authors: Malartic, Quentin; Chowdhury, Nilabhra Roy; Cojocaru, Ruxandra; Farooq, Mugariya; Campesan, Giulia; Djilali, Yasser Abdelaziz Dahou; Narayan, Sanath; Singh, Ankit; Velikanov, Maksim; Boussaha, Basma El Amel; Al-Yafeai, Mohammed; Alobeidli, Hamza; Qadi, Leen Al; Seddik, Mohamed El Amine; Fedyanin, Kirill; Alami, Reda; Hacid, Hakim
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition
Online access: https://arxiv.org/abs/2407.14885
DOI: 10.48550/arxiv.2407.14885
Published: 2024-07-20
Source: arXiv.org