Introduction to LLM vulnerabilities
Introduction to LLM vulnerabilities This is an introductory course on vulnerabilities in Large Language Models (LLMs) and language models in general. It provides a deep dive into the practical applications of LLMs using Azure's AI services. Upon completion, learners will be a...
Saved in:
Other contributors: | |
---|---|
Format: | Electronic video |
Language: | English |
Published: |
[Place of publication not identified]
Pragmatic AI Solutions
2024
|
Edition: | [First edition]. |
Subjects: | |
Online access: | license required |
MARC
LEADER | 00000ngm a22000002 4500 | ||
---|---|---|---|
001 | ZDB-30-ORH-103620613 | ||
003 | DE-627-1 | ||
005 | 20240603113654.0 | ||
006 | m o | | | ||
007 | cr uuu---uuuuu | ||
008 | 240603s2024 xx ||| |o o ||eng c | ||
035 | |a (DE-627-1)103620613 | ||
035 | |a (DE-599)KEP103620613 | ||
035 | |a (ORHE)28562236VIDEOPAIML | ||
035 | |a (DE-627-1)103620613 | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
082 | 0 | |a 006.3/5 |2 23/eng/20240422 | |
245 | 1 | 0 | |a Introduction to LLM vulnerabilities |
250 | |a [First edition]. | ||
264 | 1 | |a [Place of publication not identified] |b Pragmatic AI Solutions |c 2024 | |
300 | |a 1 online resource (1 video file (1 hr., 26 min.)) |b sound, color. | ||
336 | |a zweidimensionales bewegtes Bild |b tdi |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Online resource; title from title details screen (O'Reilly, viewed April 22, 2024) | ||
520 | |a Introduction to LLM vulnerabilities This is an introductory course on vulnerabilities in Large Language Models (LLMs) and language models in general. It provides a deep dive into the practical applications of LLMs using Azure's AI services. Upon completion, learners will be able to: Explain the concept of model replication or model shadowing as a potential attack vector in large language models, and describe methods to mitigate it through techniques like rate limiting and buffering. Analyze the potential benefits and limitations of using pre-trained LLMs. Develop strategies for mitigating risks and ethical considerations when deploying LLM-powered applications. Describe the high-level process of creating a large language model, including data collection, cleaning, and training. Explain the role of security in large language models and recognize potential security vulnerabilities and attack vectors. Identify insecure plugin designs in large language model software development kits (SDKs) that could lead to remote execution and implement strategies to secure plugins. You will learn how to secure your large language model (LLM) applications by addressing potential vulnerabilities. You will explore strategies to mitigate risks from insecure plugin design, including proper input validation and sanitization. Additionally, you will discover techniques to protect against sensitive information disclosure, such as using a redaction service to remove personally identifiable data from prompts and model responses. Finally, you will learn how to actively monitor your application dependencies for security updates and vulnerabilities, ensuring your system remains secure over time. 
Week 1: Foundations of Language Models This week you will get a brief overview of LLMs and how they work. Learning Objectives Analyze common types of generative applications and their architectures, including multi-model applications, and understand their challenges and benefits. Explain the functioning of a multi-model application, including the role of the framework and specialized machine learning models. Identify the advantages of smaller, specialized models in terms of resource usage, interaction speed, and deployment agility. Compare and contrast different generative AI application types, such as API-based, embedded models, and multi-model applications, and understand their use cases and challenges. Recognize the importance of large language models in various real-world applications, including text-based chatting, customer service, content creation, and daily tasks. Evaluate the benefits and drawbacks of large language models, considering aspects like accuracy, privacy, and potential misuse. Understand the basics of tokenization, indexing, and probability machines in the context of large language models. Describe the high-level process of creating a large language model, including data collection, cleaning, and training. Explain the role of security in large language models and recognize potential security vulnerabilities and attack vectors. Week 2: Language Model Vulnerabilities This week focuses on model-based vulnerabilities that you can explore with prompts. Learning Objectives Explain the concept of model replication or model shadowing as a potential attack vector in large language models, and describe methods to mitigate it through techniques like rate limiting and buffering. Identify and demonstrate insecure output handling in large language models, and understand the potential security threats and attack vectors associated with it. 
Understand prompt injection and its implications for large language models, including how certain applications define the initial behavior of these models and how to exploit implicit system prompts. Recognize model theft vulnerabilities and understand how handling and access to system components can impact model security, particularly in the context of dynamically loaded models from external sources. Week 3: System vulnerabilities This week you will learn how to deal with environments and system-based vulnerabilities as they relate to LLMs. Learning Objectives Identify insecure plugin designs in large language model software development kits (SDKs) that could lead to remote execution and implement strategies to secure plugins. Explain the potential risks of sensitive information disclosure in large language models and implement measures to redact personally identifiable information using HTTP APIs and regular expressions. Monitor and update dependencies in large language model applications to prevent potential security vulnerabilities and automate the process using tools like GitHub's Dependabot. Evaluate application vulnerabilities based on the programming language and framework, and implement measures to prevent potential security threats. Week 4: Other types of vulnerabilities Learning Objectives Identify potential security threats and vulnerabilities associated with large and small language models. Implement strategies to prevent security incidents and make environments more secure. Recognize the concept of excessive agency in large language models and its potential impacts on functionality. Explain the denial of service threat for large language models and describe methods to guard against API misuse. About your instructor Alfredo Deza has over a decade of experience as a Software Engineer doing DevOps, automation, and scalable system architecture. 
Before getting into technology he participated in the 2004 Olympic Games and was the first-ever World Champion in High Jump representing Peru. He currently works in Developer Relations at Microsoft and is an Adjunct Professor at Duke University teaching Machine Learning, Cloud Computing, Data Engineering, Python, and Rust. With Alfredo's guidance, you will gain the knowledge and skills to understand and work with vulnerabilities within language models. Resources Introduction to Generative AI Responsible Generative AI and Local LLMs Practical MLOps book. | ||
650 | 0 | |a Natural language processing (Computer science) | |
650 | 0 | |a Artificial intelligence | |
650 | 4 | |a Traitement automatique des langues naturelles | |
650 | 4 | |a Intelligence artificielle | |
650 | 4 | |a artificial intelligence | |
650 | 4 | |a Instructional films | |
650 | 4 | |a Nonfiction films | |
650 | 4 | |a Internet videos | |
650 | 4 | |a Films de formation | |
650 | 4 | |a Films autres que de fiction | |
650 | 4 | |a Vidéos sur Internet | |
700 | 1 | |a Deza, Alfredo |e MitwirkendeR |4 ctb | |
710 | 2 | |a Pragmatic AI Solutions (Firm), |e Verlag |4 pbl | |
856 | 4 | 0 | |l TUM01 |p ZDB-30-ORH |q TUM_PDA_ORH |u https://learning.oreilly.com/library/view/-/28562236VIDEOPAIML/?ar |m X:ORHE |x Aggregator |z lizenzpflichtig |3 Volltext |
912 | |a ZDB-30-ORH | ||
935 | |c vide | ||
951 | |a BO | ||
912 | |a ZDB-30-ORH | ||
049 | |a DE-91 |
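Finally, the dependency-monitoring objective names GitHub's Dependabot. For a Python-based LLM application, a minimal `.github/dependabot.yml` enabling weekly update checks looks like this (the ecosystem and interval shown are example choices):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"   # ecosystem of the LLM app's dependencies
    directory: "/"             # where requirements/pyproject files live
    schedule:
      interval: "weekly"
```

Once committed, Dependabot opens pull requests when dependencies have security updates or new releases, automating the monitoring the course describes.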
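The course description above cites rate limiting (with buffering) as a mitigation for model replication/shadowing and API abuse. As a minimal sketch of the idea, not the course's actual implementation, a per-client sliding-window limiter could look like this (the class and parameter names are illustrative):

```python
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Allow at most `max_requests` per client within `window_seconds`."""

    def __init__(self, max_requests=60, window_seconds=60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._hits = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id, now=None):
        """Return True and record the request if the client is under the limit."""
        now = time.monotonic() if now is None else now
        hits = self._hits[client_id]
        # Evict timestamps that have fallen out of the window.
        while hits and now - hits[0] >= self.window_seconds:
            hits.popleft()
        if len(hits) < self.max_requests:
            hits.append(now)
            return True
        return False
```

Calling `allow()` before each model invocation slows an attacker who is harvesting many prompt/response pairs to replicate the model, at little cost to ordinary users.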
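Week 3 mentions redacting personally identifiable information from prompts and responses using regular expressions. A toy version of such a redaction step might look like the following; the patterns are deliberately simple illustrations, and a production system would use a vetted redaction service rather than these hand-rolled regexes:

```python
import re

# Illustrative patterns only; real PII detection is far broader than this.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace matched PII with [LABEL] placeholders before the text
    reaches the model, the logs, or the end user."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The same function can run on both inbound prompts and outbound model responses, which is the two-sided protection the description alludes to.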
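The description also flags insecure plugin design in LLM SDKs as a path to remote execution. One common defense, sketched here under assumed names (`ALLOWED_PLUGINS` and `validate_plugin_call` are hypothetical, not part of any real SDK), is to validate model-generated plugin invocations against a strict allowlist instead of executing them directly:

```python
# Hypothetical allowlist; a real SDK would define its own plugin registry.
ALLOWED_PLUGINS = {"search", "summarize"}
MAX_ARG_LENGTH = 256

def validate_plugin_call(plugin_name, argument):
    """Reject plugin invocations outside a strict allowlist, rather than
    passing model output straight to a shell or eval()."""
    if plugin_name not in ALLOWED_PLUGINS:
        raise ValueError(f"unknown plugin: {plugin_name!r}")
    if len(argument) > MAX_ARG_LENGTH:
        raise ValueError("argument too long")
    if any(ch in argument for ch in ";|&`$"):
        raise ValueError("argument contains shell metacharacters")
    return plugin_name, argument
```

Validation happens before any plugin code runs, so a prompt-injected instruction like `search; rm -rf /` is rejected instead of executed.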
Record in the search index
DE-BY-TUM_katkey | ZDB-30-ORH-103620613 |
---|---|
_version_ | 1818767370056892417 |
adam_text | |
any_adam_object | |
author2 | Deza, Alfredo |
author2_role | ctb |
author2_variant | a d ad |
author_facet | Deza, Alfredo |
building | Verbundindex |
bvnumber | localTUM |
collection | ZDB-30-ORH |
ctrlnum | (DE-627-1)103620613 (DE-599)KEP103620613 (ORHE)28562236VIDEOPAIML |
dewey-full | 006.3/5 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.3/5 |
dewey-search | 006.3/5 |
dewey-sort | 16.3 15 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
edition | [First edition]. |
format | Electronic Video |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>07799ngm a22004932 4500</leader><controlfield tag="001">ZDB-30-ORH-103620613</controlfield><controlfield tag="003">DE-627-1</controlfield><controlfield tag="005">20240603113654.0</controlfield><controlfield tag="006">m o | | </controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">240603s2024 xx ||| |o o ||eng c</controlfield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627-1)103620613</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)KEP103620613</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ORHE)28562236VIDEOPAIML</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627-1)103620613</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">006.3/5</subfield><subfield code="2">23/eng/20240422</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Introduction to LLM vulnerabilities</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">[First edition].</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">[Place of publication not identified]</subfield><subfield code="b">Pragmatic AI Solutions</subfield><subfield code="c">2024</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (1 video file (1 hr., 26 min.))</subfield><subfield code="b">sound, color.</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">zweidimensionales bewegtes Bild</subfield><subfield 
code="b">tdi</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Online resource; title from title details screen (O'Reilly, viewed April 22, 2024)</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Introduction to LLM vulnerabilities This introductory course on vulnerabilities for Large Language Models (LLMs) and language models in general. It provides a deep dive into the practical applications of large language models (LLMs) using Azure's AI services. Upon completion, learners will be able to: Explain the concept of model replication or model shadowing as a potential attack vector in large language models, and describe methods to mitigate it through techniques like rate limiting and buffering. Analyze the potential benefits and limitations of using pre-trained LLMs Develop strategies for mitigating risks and ethical considerations when deploying LLM-powered applications. Describe the high-level process of creating a large language model, including data collection, cleaning, and training. Explain the role of security in large language models and recognize potential security vulnerabilities and attack vectors. Identify insecure plugin designs in large language model software development kits (SDKs) that could lead to remote execution and implement strategies to secure plugins. You will learn how to secure your large language model (LLM) applications by addressing potential vulnerabilities. You will explore strategies to mitigate risks from insecure plugin design, including proper input validation and sanitization. 
Additionally, you will discover techniques to protect against sensitive information disclosure, such as using a redaction service to remove personally identifiable data from prompts and model responses. Finally, you will learn how to actively monitor your application dependencies for security updates and vulnerabilities, ensuring your system remains secure over time. Week 1: Foundations of Language Models This week you will get a brief overview of LLMs and how do they work Learning Objectives Analyze common types of generative applications and their architectures, including multi-model applications, and understand their challenges and benefits. Explain the functioning of a multi-model application, including the role of the framework and specialized machine learning models. Identify the advantages of smaller, specialized models in terms of resource usage, interaction speed, and deployment agility. Compare and contrast different generative AI application types, such as API-based, embedded models, and multi-model applications, and understand their use cases and challenges. Recognize the importance of large language models in various real-world applications, including text-based chatting, customer service, content creation, and daily tasks. Evaluate the benefits and drawbacks of large language models, considering aspects like accuracy, privacy, and potential misuse. Understand the basics of tokenization, indexing, and probability machines in the context of large language models. Describe the high-level process of creating a large language model, including data collection, cleaning, and training. Explain the role of security in large language models and recognize potential security vulnerabilities and attack vectors. Week 2: Language Model Vulnerabilities This week focuses on model-based vulnerabilities that you can explore with prompts. 
Learning Objectives Explain the concept of model replication or model shadowing as a potential attack vector in large language models, and describe methods to mitigate it through techniques like rate limiting and buffering. Identify and demonstrate insecure output handling in large language models, and understand the potential security threats and attack vectors associated with it. Understand prompt injection and its implications for large language models, including how certain applications define the initial behavior of these models and how to exploit implicit system prompts. Recognize model theft vulnerabilities and understand how handling and access to system components can impact model security, particularly in the context of dynamically loaded models from external sources. Week 3: System vulnerabilities This week you will learn how to deal with environments and system-based vulnerabilities as they relate to LLMs. Learning Objectives Identify insecure plugin designs in large language model software development kits (SDKs) that could lead to remote execution and implement strategies to secure plugins. Explain the potential risks of sensitive information disclosure in large language models and implement measures to redact personal identifiable information using HTTP APIs and regular expressions. Monitor and update dependencies in large language model applications to prevent potential security vulnerabilities and automate the process using tools like GitHub's Dependabot. Evaluate application vulnerabilities based on the programming language and framework, and implement measures to prevent potential security threats. Week 4: Other types of vulnerabilities Learning Objectives Identify potential security threats and vulnerabilities associated with large and small language models. Implement strategies to prevent security situations and guard against making environments more secure. 
Recognize the concept of excessive agency in large language models and its potential impacts on functionality. Explain the denial of service threat for large language models and describe methods to guard against API misuse.% About your instructor Alfredo Deza has over a decade of experience as a Software Engineer doing DevOps, automation, and scalable system architecture. Before getting into technology he participated in the 2004 Olympic Games and was the first-ever World Champion in High Jump representing Peru. He currently works in Developer Relations at Microsoft and is an Adjunct Professor at Duke University teaching Machine Learning, Cloud Computing, Data Engineering, Python, and Rust. With Alfredo's guidance, you will gain the knowledge and skills to understand and work with vulnerabilities within language models. Resources Introduction to Generative AI Responsible Generative AI and Local LLMS Practical MLOps book.</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Natural language processing (Computer science)</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Artificial intelligence</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Traitement automatique des langues naturelles</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Intelligence artificielle</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">artificial intelligence</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Instructional films</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Nonfiction films</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Internet videos</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Films de formation</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Films autres que de 
fiction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Vidéos sur Internet</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Deza, Alfredo</subfield><subfield code="e">MitwirkendeR</subfield><subfield code="4">ctb</subfield></datafield><datafield tag="710" ind1="2" ind2=" "><subfield code="a">Pragmatic AI Solutions (Firm),</subfield><subfield code="e">Verlag</subfield><subfield code="4">pbl</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="l">TUM01</subfield><subfield code="p">ZDB-30-ORH</subfield><subfield code="q">TUM_PDA_ORH</subfield><subfield code="u">https://learning.oreilly.com/library/view/-/28562236VIDEOPAIML/?ar</subfield><subfield code="m">X:ORHE</subfield><subfield code="x">Aggregator</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-30-ORH</subfield></datafield><datafield tag="935" ind1=" " ind2=" "><subfield code="c">vide</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">BO</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-30-ORH</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-91</subfield></datafield></record></collection> |
id | ZDB-30-ORH-103620613 |
illustrated | Not Illustrated |
indexdate | 2024-12-18T08:48:48Z |
institution | BVB |
language | English |
open_access_boolean | |
owner | DE-91 DE-BY-TUM |
owner_facet | DE-91 DE-BY-TUM |
physical | 1 online resource (1 video file (1 hr., 26 min.)) sound, color. |
psigel | ZDB-30-ORH |
publishDate | 2024 |
publishDateSearch | 2024 |
publishDateSort | 2024 |
publisher | Pragmatic AI Solutions |
record_format | marc |
spelling | Introduction to LLM vulnerabilities [First edition]. [Place of publication not identified] Pragmatic AI Solutions 2024 1 online resource (1 video file (1 hr., 26 min.)) sound, color. zweidimensionales bewegtes Bild tdi rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier Online resource; title from title details screen (O'Reilly, viewed April 22, 2024) Introduction to LLM vulnerabilities This introductory course on vulnerabilities for Large Language Models (LLMs) and language models in general. It provides a deep dive into the practical applications of large language models (LLMs) using Azure's AI services. Upon completion, learners will be able to: Explain the concept of model replication or model shadowing as a potential attack vector in large language models, and describe methods to mitigate it through techniques like rate limiting and buffering. Analyze the potential benefits and limitations of using pre-trained LLMs Develop strategies for mitigating risks and ethical considerations when deploying LLM-powered applications. Describe the high-level process of creating a large language model, including data collection, cleaning, and training. Explain the role of security in large language models and recognize potential security vulnerabilities and attack vectors. Identify insecure plugin designs in large language model software development kits (SDKs) that could lead to remote execution and implement strategies to secure plugins. You will learn how to secure your large language model (LLM) applications by addressing potential vulnerabilities. You will explore strategies to mitigate risks from insecure plugin design, including proper input validation and sanitization. Additionally, you will discover techniques to protect against sensitive information disclosure, such as using a redaction service to remove personally identifiable data from prompts and model responses. 
Finally, you will learn how to actively monitor your application dependencies for security updates and vulnerabilities, ensuring your system remains secure over time. Week 1: Foundations of Language Models This week you will get a brief overview of LLMs and how do they work Learning Objectives Analyze common types of generative applications and their architectures, including multi-model applications, and understand their challenges and benefits. Explain the functioning of a multi-model application, including the role of the framework and specialized machine learning models. Identify the advantages of smaller, specialized models in terms of resource usage, interaction speed, and deployment agility. Compare and contrast different generative AI application types, such as API-based, embedded models, and multi-model applications, and understand their use cases and challenges. Recognize the importance of large language models in various real-world applications, including text-based chatting, customer service, content creation, and daily tasks. Evaluate the benefits and drawbacks of large language models, considering aspects like accuracy, privacy, and potential misuse. Understand the basics of tokenization, indexing, and probability machines in the context of large language models. Describe the high-level process of creating a large language model, including data collection, cleaning, and training. Explain the role of security in large language models and recognize potential security vulnerabilities and attack vectors. Week 2: Language Model Vulnerabilities This week focuses on model-based vulnerabilities that you can explore with prompts. Learning Objectives Explain the concept of model replication or model shadowing as a potential attack vector in large language models, and describe methods to mitigate it through techniques like rate limiting and buffering. 
Identify and demonstrate insecure output handling in large language models, and understand the potential security threats and attack vectors associated with it. Understand prompt injection and its implications for large language models, including how certain applications define the initial behavior of these models and how to exploit implicit system prompts. Recognize model theft vulnerabilities and understand how handling and access to system components can impact model security, particularly in the context of dynamically loaded models from external sources. Week 3: System vulnerabilities This week you will learn how to deal with environments and system-based vulnerabilities as they relate to LLMs. Learning Objectives Identify insecure plugin designs in large language model software development kits (SDKs) that could lead to remote execution and implement strategies to secure plugins. Explain the potential risks of sensitive information disclosure in large language models and implement measures to redact personal identifiable information using HTTP APIs and regular expressions. Monitor and update dependencies in large language model applications to prevent potential security vulnerabilities and automate the process using tools like GitHub's Dependabot. Evaluate application vulnerabilities based on the programming language and framework, and implement measures to prevent potential security threats. Week 4: Other types of vulnerabilities Learning Objectives Identify potential security threats and vulnerabilities associated with large and small language models. Implement strategies to prevent security situations and guard against making environments more secure. Recognize the concept of excessive agency in large language models and its potential impacts on functionality. 
Explain the denial of service threat for large language models and describe methods to guard against API misuse.% About your instructor Alfredo Deza has over a decade of experience as a Software Engineer doing DevOps, automation, and scalable system architecture. Before getting into technology he participated in the 2004 Olympic Games and was the first-ever World Champion in High Jump representing Peru. He currently works in Developer Relations at Microsoft and is an Adjunct Professor at Duke University teaching Machine Learning, Cloud Computing, Data Engineering, Python, and Rust. With Alfredo's guidance, you will gain the knowledge and skills to understand and work with vulnerabilities within language models. Resources Introduction to Generative AI Responsible Generative AI and Local LLMS Practical MLOps book. Natural language processing (Computer science) Artificial intelligence Traitement automatique des langues naturelles Intelligence artificielle artificial intelligence Instructional films Nonfiction films Internet videos Films de formation Films autres que de fiction Vidéos sur Internet Deza, Alfredo MitwirkendeR ctb Pragmatic AI Solutions (Firm), Verlag pbl TUM01 ZDB-30-ORH TUM_PDA_ORH https://learning.oreilly.com/library/view/-/28562236VIDEOPAIML/?ar X:ORHE Aggregator lizenzpflichtig Volltext |
spellingShingle | Introduction to LLM vulnerabilities Natural language processing (Computer science) Artificial intelligence Traitement automatique des langues naturelles Intelligence artificielle artificial intelligence Instructional films Nonfiction films Internet videos Films de formation Films autres que de fiction Vidéos sur Internet |
title | Introduction to LLM vulnerabilities |
title_auth | Introduction to LLM vulnerabilities |
title_exact_search | Introduction to LLM vulnerabilities |
title_full | Introduction to LLM vulnerabilities |
title_fullStr | Introduction to LLM vulnerabilities |
title_full_unstemmed | Introduction to LLM vulnerabilities |
title_short | Introduction to LLM vulnerabilities |
title_sort | introduction to llm vulnerabilities |
topic | Natural language processing (Computer science) Artificial intelligence Traitement automatique des langues naturelles Intelligence artificielle artificial intelligence Instructional films Nonfiction films Internet videos Films de formation Films autres que de fiction Vidéos sur Internet |
topic_facet | Natural language processing (Computer science) Artificial intelligence Traitement automatique des langues naturelles Intelligence artificielle artificial intelligence Instructional films Nonfiction films Internet videos Films de formation Films autres que de fiction Vidéos sur Internet |
url | https://learning.oreilly.com/library/view/-/28562236VIDEOPAIML/?ar |
work_keys_str_mv | AT dezaalfredo introductiontollmvulnerabilities AT pragmaticaisolutionsfirm introductiontollmvulnerabilities |