Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift

Recent advances in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security, namely the detection of hostile actions based on the unusual nature of events observed on the Information System. In our previous work (presented at C&ESAR 2018 and FIC 2019), we combined deep neural network auto-encoders for anomaly detection with graph-based event correlation to address major limitations of UEBA systems. This reduced false positive and false negative rates and improved alert explainability, while maintaining real-time performance and scalability. However, we did not address the natural evolution of behaviours over time, also known as concept drift. To maintain effective detection capabilities, an anomaly-based detection system must be continually retrained, which opens a door to an adversary who can conduct the so-called "frog-boiling" attack: progressively distilling unnoticed attack traces into the behavioural models until the complete attack is considered normal. In this paper, we present a solution that effectively mitigates this attack by improving the detection process and efficiently leveraging human expertise. We also present preliminary work on an adversarial AI that conducts deception attacks and will, in turn, be used to help assess and improve the defense system. These defensive and offensive AIs implement joint, continual and active learning, a step that is necessary for assessing, validating and certifying AI-based defensive solutions.
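The "frog-boiling" poisoning attack described in the abstract can be illustrated with a toy continually-trained detector (a hypothetical sketch, not the authors' system): a simple k-sigma threshold over a learned baseline that retrains on every event it accepts as normal.

```python
import statistics

class ContinualDetector:
    """Toy k-sigma anomaly detector that continually retrains on every
    event it accepts as normal (hypothetical; not the paper's system)."""

    def __init__(self, baseline, k=3.0):
        self.window = list(baseline)  # learned "normal" behaviour
        self.k = k

    def observe(self, x):
        mu = statistics.fmean(self.window)
        sigma = max(statistics.pstdev(self.window), 1.0)  # floor avoids zero variance
        if abs(x - mu) > self.k * sigma:
            return "alert"            # anomalous: excluded from training
        self.window.append(x)         # continual learning on accepted events
        return "normal"

# A fresh detector flags the attack value outright...
fresh = ContinualDetector(baseline=[10.0] * 50)
print(fresh.observe(20.0))            # → alert

# ...but an attacker who drifts the baseline in small accepted steps
# ("frog-boiling") gets the very same value classified as normal.
boiled = ContinualDetector(baseline=[10.0] * 50)
x = 10.0
for _ in range(100):
    x += 0.1                          # each step stays under the k-sigma bound
    boiled.observe(x)
print(boiled.observe(20.0))           # → normal
```

This is exactly the window the paper targets: each poisoned step is individually indistinguishable from benign drift, so the mitigation has to come from the detection process itself and from selectively involving a human expert, not from a tighter threshold.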

Full description

Saved in:
Bibliographic Details
Main Authors: Dey, Alexandre, Velay, Marc, Fauvelle, Jean-Philippe, Navers, Sylvain
Format: Article
Language: English
Subjects:
Online Access: Order full text
creator Dey, Alexandre
Velay, Marc
Fauvelle, Jean-Philippe
Navers, Sylvain
description Recent advances in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security, namely the detection of hostile actions based on the unusual nature of events observed on the Information System. In our previous work (presented at C&ESAR 2018 and FIC 2019), we combined deep neural network auto-encoders for anomaly detection with graph-based event correlation to address major limitations of UEBA systems. This reduced false positive and false negative rates and improved alert explainability, while maintaining real-time performance and scalability. However, we did not address the natural evolution of behaviours over time, also known as concept drift. To maintain effective detection capabilities, an anomaly-based detection system must be continually retrained, which opens a door to an adversary who can conduct the so-called "frog-boiling" attack: progressively distilling unnoticed attack traces into the behavioural models until the complete attack is considered normal. In this paper, we present a solution that effectively mitigates this attack by improving the detection process and efficiently leveraging human expertise. We also present preliminary work on an adversarial AI that conducts deception attacks and will, in turn, be used to help assess and improve the defense system. These defensive and offensive AIs implement joint, continual and active learning, a step that is necessary for assessing, validating and certifying AI-based defensive solutions.
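The "efficiently leveraging human expertise" claim in the description suggests an active-learning triage loop. A minimal sketch (all names hypothetical, not the paper's implementation) that spends a limited analyst budget on the highest-scoring anomalies:

```python
def triage(events, score, budget=3):
    """Active-learning triage sketch (hypothetical): route only the
    top-scoring anomalies to the human analyst; the rest are handled
    automatically. Analyst verdicts on the reviewed events would then
    be fed back into the behavioural model's training set."""
    ranked = sorted(events, key=score, reverse=True)
    return ranked[:budget], ranked[budget:]   # (to_review, auto_handled)

events = [{"id": i, "score": s}
          for i, s in enumerate([0.1, 0.9, 0.4, 0.95, 0.2])]
review, auto_ok = triage(events, score=lambda e: e["score"], budget=2)
print([e["id"] for e in review])              # → [3, 1]
```

The design intent is that expensive human labels go only where the model is least certain, which is also where a frog-boiling adversary has to operate.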
doi_str_mv 10.48550/arxiv.2001.11821
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2001.11821
language eng
recordid cdi_arxiv_primary_2001_11821
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Cryptography and Security
Computer Science - Learning
Computer Science - Neural and Evolutionary Computing
title Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T13%3A59%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Adversarial%20vs%20behavioural-based%20defensive%20AI%20with%20joint,%20continual%20and%20active%20learning:%20automated%20evaluation%20of%20robustness%20to%20deception,%20poisoning%20and%20concept%20drift&rft.au=Dey,%20Alexandre&rft.date=2020-01-13&rft_id=info:doi/10.48550/arxiv.2001.11821&rft_dat=%3Carxiv_GOX%3E2001_11821%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true