Legion: Best-First Concolic Testing
Concolic execution and fuzzing are two complementary coverage-based testing techniques. How to achieve the best of both remains an open challenge. To address this research problem, we propose and evaluate Legion. Legion re-engineers the Monte Carlo tree search (MCTS) framework from the AI literature to treat automated test generation as a problem of sequential decision-making under uncertainty. Its best-first search strategy provides a principled way to learn the most promising program states to investigate at each search iteration, based on observed rewards from previous iterations. Legion incorporates a form of directed fuzzing that we call approximate path-preserving fuzzing (APPFuzzing) to investigate program states selected by MCTS. APPFuzzing serves as the Monte Carlo simulation technique and is implemented by extending prior work on constrained sampling. We evaluate Legion against competitors on 2531 benchmarks from the coverage category of Test-Comp 2020, as well as measuring its sensitivity to hyperparameters, demonstrating its effectiveness on a wide variety of input programs.
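As a rough illustration of the best-first MCTS loop the abstract describes, the sketch below implements a generic UCT-style selection/expansion/simulation/backpropagation round over a tree of program states. It is a minimal sketch written from the abstract alone: all identifiers (TreeNode, select, simulate, expand) are hypothetical and not taken from Legion's implementation, and simulate() is only a random stand-in for approximate path-preserving fuzzing and its coverage reward.

```python
# Illustrative only: generic UCT-style MCTS over a tree of program states.
# Names and the reward model are assumptions, not Legion's actual code.
import math
import random

class TreeNode:
    """One node per program state (path prefix) in the search tree."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def uct(self, c=math.sqrt(2)):
        # Unvisited children are tried first; otherwise trade off the observed
        # average reward (exploitation) against an exploration bonus.
        if self.visits == 0:
            return float("inf")
        return (self.total_reward / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def select(root):
    """Best-first descent: follow the highest-scoring child down to a leaf."""
    node = root
    while node.children:
        node = max(node.children, key=lambda child: child.uct())
    return node

def simulate(node):
    """Placeholder for APPFuzzing: sample inputs 'near' the node's state and
    return a coverage-style reward in [0, 1]."""
    return random.random()

def backpropagate(node, reward):
    """Propagate the observed reward from the simulated node up to the root."""
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        node = node.parent

def mcts_iteration(root, expand):
    """One round of selection -> expansion -> simulation -> backpropagation."""
    leaf = select(root)
    for child_state in expand(leaf.state):   # e.g. both outcomes of the next branch
        leaf.children.append(TreeNode(child_state, parent=leaf))
    target = leaf.children[0] if leaf.children else leaf
    backpropagate(target, simulate(target))

# Toy usage: states are strings; each state "expands" into two branch outcomes.
root = TreeNode("entry")
for _ in range(10):
    mcts_iteration(root, expand=lambda s: [s + "-T", s + "-F"])
```

The point of the UCT score is the abstract's "learn the most promising program states ... based on observed rewards": states whose past simulations gained coverage are revisited, while rarely visited states still receive an exploration bonus.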
Saved in:
Published in: | arXiv.org, 2020-09 |
---|---|
Main Authors: | Liu, Dongge; Ernst, Gidon; Murray, Toby; Rubinstein, Benjamin I P |
Format: | Article |
Language: | English |
Subjects: | Computer Science - Learning; Computer Science - Software Engineering; Computer simulation; Monte Carlo simulation; Program verification (computers) |
Online Access: | Full Text |
container_title | arXiv.org |
---|---|
creator | Liu, Dongge; Ernst, Gidon; Murray, Toby; Rubinstein, Benjamin I P |
description | Concolic execution and fuzzing are two complementary coverage-based testing techniques. How to achieve the best of both remains an open challenge. To address this research problem, we propose and evaluate Legion. Legion re-engineers the Monte Carlo tree search (MCTS) framework from the AI literature to treat automated test generation as a problem of sequential decision-making under uncertainty. Its best-first search strategy provides a principled way to learn the most promising program states to investigate at each search iteration, based on observed rewards from previous iterations. Legion incorporates a form of directed fuzzing that we call approximate path-preserving fuzzing (APPFuzzing) to investigate program states selected by MCTS. APPFuzzing serves as the Monte Carlo simulation technique and is implemented by extending prior work on constrained sampling. We evaluate Legion against competitors on 2531 benchmarks from the coverage category of Test-Comp 2020, as well as measuring its sensitivity to hyperparameters, demonstrating its effectiveness on a wide variety of input programs. |
doi_str_mv | 10.48550/arxiv.2002.06311 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-09 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2002_06311 |
source | arXiv.org; Free E-Journals |
subjects | Computer Science - Learning; Computer Science - Software Engineering; Computer simulation; Monte Carlo simulation; Program verification (computers) |
title | Legion: Best-First Concolic Testing |