(Security) Assertions by Large Language Models

The security of computer systems typically relies on a hardware root of trust. As vulnerabilities in hardware can have severe implications on a system, there is a need for techniques to support security verification activities. Assertion-based verification is a popular verification technique that involves capturing design intent in a set of assertions that can be used in formal verification or testing-based checking. However, writing security-centric assertions is a challenging task. In this work, we investigate the use of emerging large language models (LLMs) for code generation in hardware assertion generation for security, where primarily natural language prompts, such as those one would see as code comments in assertion files, are used to produce SystemVerilog assertions. We focus our attention on a popular LLM and characterize its ability to write assertions out of the box, given varying levels of detail in the prompt. We design an evaluation framework that generates a variety of prompts, and we create a benchmark suite comprising real-world hardware designs and corresponding golden reference assertions that we want to generate with the LLM.
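As a concrete illustration of the task the paper studies, a natural-language comment serving as a prompt for a SystemVerilog assertion, consider the following sketch. The module context, signal names (clk, rst_n, grant_secure, priv_mode), and the property itself are invented for illustration and are not taken from the paper's benchmark suite:

```systemverilog
// Hypothetical comment-style prompt, followed by the kind of SystemVerilog
// assertion an LLM might complete from it.

// assert that the secure grant signal is only raised in privileged mode
assert property (@(posedge clk) disable iff (!rst_n)
                 grant_secure |-> priv_mode);
```

In the workflow the abstract describes, the comment line plays the role of the prompt and the `assert property` statement is the generation target, to be compared against a golden reference assertion.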

Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org 2024-07
Main Authors: Kande, Rahul; Pearce, Hammond; Tan, Benjamin; Dolan-Gavitt, Brendan; Thakur, Shailja; Karri, Ramesh; Rajendran, Jeyavijayan
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Cryptography and Security; Hardware; Large language models; Natural language (computers); Security; Verification
Online Access: Full text
description The security of computer systems typically relies on a hardware root of trust. As vulnerabilities in hardware can have severe implications on a system, there is a need for techniques to support security verification activities. Assertion-based verification is a popular verification technique that involves capturing design intent in a set of assertions that can be used in formal verification or testing-based checking. However, writing security-centric assertions is a challenging task. In this work, we investigate the use of emerging large language models (LLMs) for code generation in hardware assertion generation for security, where primarily natural language prompts, such as those one would see as code comments in assertion files, are used to produce SystemVerilog assertions. We focus our attention on a popular LLM and characterize its ability to write assertions out of the box, given varying levels of detail in the prompt. We design an evaluation framework that generates a variety of prompts, and we create a benchmark suite comprising real-world hardware designs and corresponding golden reference assertions that we want to generate with the LLM.
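The evaluation-framework idea in the description, generating prompts at varying levels of detail and checking the LLM's output against a golden reference assertion, can be sketched minimally. Everything below (function names, the prompt templates, the example module and property) is a hypothetical stand-in, not the paper's actual framework:

```python
def build_prompts(module_name, signal_hint, security_comment):
    """Generate prompts at increasing levels of detail for one assertion,
    mirroring the idea of varying how much design context the LLM sees."""
    base = f"// assert: {security_comment}"
    return [
        base,                                                        # comment only
        f"// module {module_name}\n{base}",                          # plus module name
        f"// module {module_name}, signals: {signal_hint}\n{base}",  # plus signal hints
    ]

def matches_golden(candidate, golden):
    """Crude equivalence check: compare whitespace-normalized text.
    A real framework would use formal equivalence checking or simulation;
    this stand-in only catches textually identical assertions."""
    norm = lambda s: " ".join(s.split())
    return norm(candidate) == norm(golden)

# Illustrative use: three prompt variants for one (invented) security property.
prompts = build_prompts(
    "lock_ctrl", "lock, unlock_req, state",
    "the lock register can only be cleared on reset",
)
golden = "assert property (@(posedge clk) $fell(lock) |-> !rst_n);"
print(len(prompts))  # three detail levels
```

The point of the sketch is the shape of the pipeline: prompt variants go in, candidate assertions come back, and each candidate is scored against the golden reference.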
doi 10.48550/arxiv.2306.14027
format Article
publisher Ithaca: Cornell University Library, arXiv.org
rights 2024. This work is published under http://creativecommons.org/licenses/by-sa/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
published paper doi 10.1109/TIFS.2024.3372809
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-07
language eng
source arXiv.org; Free E-Journals
subjects Computer Science - Artificial Intelligence
Computer Science - Cryptography and Security
Hardware
Large language models
Natural language (computers)
Security
Verification
title (Security) Assertions by Large Language Models