Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations
We propose Corder, a self-supervised contrastive learning framework for source code models. Corder is designed to alleviate the need for labeled data in code retrieval and code summarization tasks. The pre-trained Corder model can be used in two ways: (1) it can produce vector representations of code, which can be applied to code retrieval tasks that lack labeled data; (2) it can be fine-tuned for tasks that still require labeled data, such as code summarization.
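The abstract describes training a code encoder to recognize similar and dissimilar snippets through a contrastive objective. As a rough, hedged sketch of what such an objective can look like (not the paper's actual implementation; the function name, batch layout, and temperature value are illustrative assumptions), here is an NT-Xent-style loss in PyTorch:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_orig, z_aug, temperature=0.1):
    """Contrastive (NT-Xent-style) loss over a batch of code embeddings.

    z_orig: (batch, dim) embeddings of the original snippets.
    z_aug:  (batch, dim) embeddings of their semantically equivalent,
            transformed variants. Row i of z_aug is the positive for row i
            of z_orig; every other row in the batch acts as a negative.
    """
    z_orig = F.normalize(z_orig, dim=1)
    z_aug = F.normalize(z_aug, dim=1)

    # Pairwise cosine similarities between originals and transformed variants.
    logits = z_orig @ z_aug.t() / temperature   # shape: (batch, batch)

    # Each original should be most similar to its own transformed variant,
    # i.e. the diagonal entry of the similarity matrix.
    targets = torch.arange(z_orig.size(0), device=z_orig.device)
    return F.cross_entropy(logits, targets)
```

In practice, both embedding batches would come from the same encoder, applied once to each original snippet and once to one of its transformed variants.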
Published in: arXiv.org, 2021-05
Main authors: Nghi D Q Bui; Yu, Yijun; Jiang, Lingxiao
Format: Article
Language: English
Online access: Full text
container_title | arXiv.org |
creator | Nghi D Q Bui; Yu, Yijun; Jiang, Lingxiao |
description | We propose Corder, a self-supervised contrastive learning framework for source code models. Corder is designed to alleviate the need for labeled data in code retrieval and code summarization tasks. The pre-trained Corder model can be used in two ways: (1) it can produce vector representations of code, which can be applied to code retrieval tasks that lack labeled data; (2) it can be fine-tuned for tasks that still require labeled data, such as code summarization. The key innovation is that we train the source code model by asking it to recognize similar and dissimilar code snippets through a contrastive learning objective. To do so, we use a set of semantic-preserving transformation operators to generate code snippets that are syntactically diverse but semantically equivalent (an illustrative sketch of one such operator follows this record). Through extensive experiments, we have shown that the code models pre-trained by Corder substantially outperform the other baselines for code-to-code retrieval, text-to-code retrieval, and code-to-text summarization tasks. |
doi_str_mv | 10.48550/arxiv.2009.02731 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2009_02731 |
source | arXiv.org; Free E-Journals |
subjects | Artificial neural networks; Clustering; Coders; Computer Science - Artificial Intelligence; Computer Science - Learning; Computer Science - Programming Languages; Computer Science - Software Engineering; Learning; Representations; Semantics; Source code |
title | Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations |
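The description field above mentions semantic-preserving transformation operators that generate syntactically diverse but semantically equivalent snippets. The following is a toy illustration of one such operator, consistent identifier renaming, written for Python source with the standard ast module; the operators, target languages, and tooling used in the paper may differ, so treat this only as a sketch of the general idea.

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Illustrative semantic-preserving transform: consistently rename
    variable names and parameters so the snippet's syntax changes while its
    behaviour stays the same.

    Note: a real operator would need proper scope analysis to avoid renaming
    built-ins, imports, or globals; this toy version renames every variable
    name and parameter it sees.
    """

    def __init__(self):
        self.mapping = {}

    def _new_name(self, old):
        if old not in self.mapping:
            self.mapping[old] = f"var_{len(self.mapping)}"
        return self.mapping[old]

    def visit_Name(self, node):   # variable reads and writes
        node.id = self._new_name(node.id)
        return node

    def visit_arg(self, node):    # function parameters
        node.arg = self._new_name(node.arg)
        return node

original = "def add(a, b):\n    total = a + b\n    return total"
transformed = ast.unparse(RenameVariables().visit(ast.parse(original)))  # Python 3.9+
print(transformed)
# def add(var_0, var_1):
#     var_2 = var_0 + var_1
#     return var_2
```

The original snippet and its renamed variant would then form a positive pair for the contrastive objective sketched earlier, while snippets derived from other programs serve as negatives.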