Needle in a Haystack: Detecting Subtle Malicious Edits to Additive Manufacturing G-code Files
Increasing usage of Digital Manufacturing (DM) in safety-critical domains is increasing attention on the cybersecurity of the manufacturing process, as malicious third parties might aim to introduce defects in digital designs. In general, the DM process involves creating a digital object (as CAD files) before using a slicer program to convert the models into printing instructions (e.g. g-code) suitable for the target printer. As the g-code is an intermediate machine format, malicious edits may be difficult to detect, especially when the golden (original) models are not available to the manufacturer. In this work we aim to quantify this hypothesis through a red-team/blue-team case study, whereby the red-team aims to introduce subtle defects that would impact the properties (strengths) of the 3D printed parts, and the blue-team aims to detect these modifications in the absence of the golden models. The case study had two sets of models, the first with 180 designs (with 2 compromised using 2 methods) and the second with 4320 designs (with 60 compromised using 6 methods). Using statistical modelling and machine learning (ML), the blue-team was able to detect all the compromises in the first set of data, and 50 of the compromises in the second.
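The abstract describes screening g-code for subtle malicious edits without access to a golden reference model. As a rough illustration only, and not the authors' actual pipeline, the Python sketch below computes one simple per-move feature (extruded filament per millimetre of XY travel) from a g-code file and flags moves whose value is a statistical outlier within the same file. The file name, the chosen feature, the z-score threshold, and the assumption of absolute extrusion mode are all assumptions made for this example.

```python
# Illustrative sketch only: flags G-code printing moves whose
# extrusion-per-mm-of-travel deviates strongly from the file's own
# statistics. NOT the authors' detection method; "part.gcode", the
# feature, and the threshold are hypothetical choices. Assumes
# absolute coordinates and absolute extrusion (G90 / M82).
import math
import re
import statistics

MOVE = re.compile(r"^G1\b")
PARAM = re.compile(r"([XYE])(-?\d+\.?\d*)")

def extrusion_ratios(path):
    """Yield (line_number, extruded_mm / travelled_mm) for each printing move."""
    x = y = e = 0.0
    with open(path) as fh:
        for lineno, line in enumerate(fh, start=1):
            if not MOVE.match(line):
                continue
            params = dict(PARAM.findall(line))
            nx = float(params.get("X", x))
            ny = float(params.get("Y", y))
            ne = float(params.get("E", e))
            travel = math.hypot(nx - x, ny - y)
            extruded = ne - e
            x, y, e = nx, ny, ne
            if travel > 0 and extruded > 0:  # printing move, not travel/retract
                yield lineno, extruded / travel

def flag_outliers(path, z_threshold=4.0):
    """Return line numbers whose extrusion ratio is a statistical outlier."""
    samples = list(extrusion_ratios(path))
    ratios = [r for _, r in samples]
    if not ratios:
        return []
    mu, sigma = statistics.mean(ratios), statistics.pstdev(ratios)
    if sigma == 0:
        return []
    return [ln for ln, r in samples if abs(r - mu) / sigma > z_threshold]

if __name__ == "__main__":
    print(flag_outliers("part.gcode"))  # hypothetical file name
```

Per the abstract, the blue-team combined statistical modelling with machine-learning classifiers; this sketch only conveys the flavour of reference-free anomaly screening on a single feature.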
Saved in:

Published in: | arXiv.org, 2021-11 |
---|---|
Main authors: | Beckwith, Caleb; Naicker, Harsh Sankar; Mehta, Svara; Udupa, Viba R; Nghia Tri Nim; Gadre, Varun; Hammond Pearce; Mac, Gary; Gupta, Nikhil |
Format: | Article |
Language: | eng |
Subjects: | Case studies; Computer Science - Cryptography and Security; Cybersecurity; Defects; G codes; Machine learning; Manufacturing; Safety critical; Statistical methods; Statistical models; Three dimensional printing |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Beckwith, Caleb; Naicker, Harsh Sankar; Mehta, Svara; Udupa, Viba R; Nghia Tri Nim; Gadre, Varun; Hammond Pearce; Mac, Gary; Gupta, Nikhil |
description | Increasing usage of Digital Manufacturing (DM) in safety-critical domains is increasing attention on the cybersecurity of the manufacturing process, as malicious third parties might aim to introduce defects in digital designs. In general, the DM process involves creating a digital object (as CAD files) before using a slicer program to convert the models into printing instructions (e.g. g-code) suitable for the target printer. As the g-code is an intermediate machine format, malicious edits may be difficult to detect, especially when the golden (original) models are not available to the manufacturer. In this work we aim to quantify this hypothesis through a red-team/blue-team case study, whereby the red-team aims to introduce subtle defects that would impact the properties (strengths) of the 3D printed parts, and the blue-team aims to detect these modifications in the absence of the golden models. The case study had two sets of models, the first with 180 designs (with 2 compromised using 2 methods) and the second with 4320 designs (with 60 compromised using 6 methods). Using statistical modelling and machine learning (ML), the blue-team was able to detect all the compromises in the first set of data, and 50 of the compromises in the second. |
doi_str_mv | 10.48550/arxiv.2111.12746 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2111_12746 |
source | arXiv.org; Free E-Journals |
subjects | Case studies; Computer Science - Cryptography and Security; Cybersecurity; Defects; G codes; Machine learning; Manufacturing; Safety critical; Statistical methods; Statistical models; Three dimensional printing |
title | Needle in a Haystack: Detecting Subtle Malicious Edits to Additive Manufacturing G-code Files |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-19T22%3A57%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Needle%20in%20a%20Haystack:%20Detecting%20Subtle%20Malicious%20Edits%20to%20Additive%20Manufacturing%20G-code%20Files&rft.jtitle=arXiv.org&rft.au=Beckwith,%20Caleb&rft.date=2021-11-24&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2111.12746&rft_dat=%3Cproquest_arxiv%3E2604248366%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2604248366&rft_id=info:pmid/&rfr_iscdi=true |