MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework
AI-driven models are increasingly deployed in operational analytics solutions, for instance, in investigative journalism or the intelligence community. Current approaches face two primary challenges: ethical and privacy concerns, as well as difficulties in efficiently combining heterogeneous data sources for multimodal analytics.
Saved in:
Main authors: | Fischer, Maximilian T; Metz, Yannick; Joos, Lucas; Miller, Matthias; Keim, Daniel A |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Human-Computer Interaction; Computer Science - Multimedia |
Online access: | Request full text |
creator | Fischer, Maximilian T; Metz, Yannick; Joos, Lucas; Miller, Matthias; Keim, Daniel A |
description | AI-driven models are increasingly deployed in operational analytics
solutions, for instance, in investigative journalism or the intelligence
community. Current approaches face two primary challenges: ethical and privacy
concerns, as well as difficulties in efficiently combining heterogeneous data
sources for multimodal analytics. To tackle the challenge of multimodal
analytics, we present MULTI-CASE, a holistic visual analytics framework
tailored towards ethics-aware and multimodal intelligence exploration, designed
in collaboration with domain experts. It leverages an equal joint agency
between human and AI to explore and assess heterogeneous information spaces,
checking and balancing automation through Visual Analytics. MULTI-CASE operates
on a fully-integrated data model and features type-specific analysis with
multiple linked components, including a combined search, annotated text view,
and graph-based analysis. Parts of the underlying entity detection are based on
a RoBERTa-based language model, which we tailored towards user requirements
through fine-tuning. An overarching knowledge exploration graph combines all
information streams, provides in-situ explanations, transparent source
attribution, and facilitates effective exploration. To assess our approach, we
conducted a comprehensive set of evaluations: We benchmarked the underlying
language model on relevant NER tasks, achieving state-of-the-art performance.
The demonstrator was assessed according to intelligence capability assessments,
while the methodology was evaluated according to ethics design guidelines. As a
case study, we present our framework in an investigative journalism setting,
supporting war crime investigations. Finally, we conduct a formative user
evaluation with domain experts in law enforcement. Our evaluations confirm that
our framework facilitates human agency and steering in security-sensitive
applications. |
doi_str_mv | 10.48550/arxiv.2401.01955 |
format | Article |
creationdate | 2024-01-03 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2401.01955 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2401_01955 |
source | arXiv.org |
subjects | Computer Science - Human-Computer Interaction; Computer Science - Multimedia |
title | MULTI-CASE: A Transformer-based Ethics-aware Multimodal Investigative Intelligence Framework |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T17%3A53%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=MULTI-CASE:%20A%20Transformer-based%20Ethics-aware%20Multimodal%20Investigative%20Intelligence%20Framework&rft.au=Fischer,%20Maximilian%20T&rft.date=2024-01-03&rft_id=info:doi/10.48550/arxiv.2401.01955&rft_dat=%3Carxiv_GOX%3E2401_01955%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
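The abstract in this record states that parts of MULTI-CASE's entity detection rely on a RoBERTa-based language model fine-tuned towards user requirements. The record does not contain the authors' actual training pipeline or data, so the following is only a minimal sketch of how such NER fine-tuning is commonly set up with the Hugging Face libraries; the "roberta-base" checkpoint, the CoNLL-2003 stand-in corpus, and all hyperparameters are assumptions for illustration.

```python
# Minimal NER fine-tuning sketch (not the paper's actual pipeline).
# Assumptions: "roberta-base" checkpoint, CoNLL-2003 as a stand-in corpus,
# and illustrative hyperparameters.
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

dataset = load_dataset("conll2003")  # stand-in NER corpus
label_names = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=len(label_names))

def tokenize_and_align(batch):
    # RoBERTa splits words into subwords; keep the tag on the first subword
    # and mask the rest with -100 so they are ignored by the loss.
    encoded = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        previous, labels = None, []
        for word_id in encoded.word_ids(batch_index=i):
            if word_id is None or word_id == previous:
                labels.append(-100)
            else:
                labels.append(tags[word_id])
            previous = word_id
        all_labels.append(labels)
    encoded["labels"] = all_labels
    return encoded

tokenized = dataset.map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-ner-sketch",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```

The abstract reports benchmarking the fine-tuned model on relevant NER tasks; the corpus and settings above are placeholders, not the evaluation setup used in the paper.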
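The abstract also describes an overarching knowledge exploration graph that combines all information streams and provides transparent source attribution. A minimal way to represent that idea, again an assumption rather than the paper's actual data model, is a graph whose nodes and edges carry provenance records pointing back to the originating document and text span:

```python
# Provenance-aware entity graph sketch; entity types, attribute names, and the
# example documents are hypothetical and only illustrate source attribution.
import networkx as nx

graph = nx.MultiDiGraph()

def add_entity(g, name, etype, source, span):
    # Each node keeps a list of provenance records: which document and
    # character span the entity was extracted from.
    if name not in g:
        g.add_node(name, type=etype, provenance=[])
    g.nodes[name]["provenance"].append({"source": source, "span": span})

add_entity(graph, "Jane Doe", "PER", source="interview_07.txt", span=(33, 41))
add_entity(graph, "Example Org", "ORG", source="report_042.pdf", span=(120, 131))
graph.add_edge("Jane Doe", "Example Org", relation="affiliated_with",
               source="interview_07.txt")

# Exploration can then surface every relation together with its origin.
for u, v, data in graph.edges(data=True):
    print(f"{u} -[{data['relation']}]-> {v}  (from {data['source']})")
```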