LOUC: Leave-One-Out-Calibration Measure for Analyzing Human Matcher Performance

Schema matching is a core data integration task, focusing on identifying correspondences among attributes of multiple schemata. Numerous algorithmic approaches were suggested for schema matching over the years, aiming at solving the task with as little human involvement as possible. Yet, humans are still required in the loop -- to validate algorithms and to produce ground truth data for algorithms to be trained against. In recent years, a new research direction investigates the capabilities and behavior of humans while performing matching tasks. Previous works utilized this knowledge to predict, and even improve, the performance of human matchers. In this work, we continue this line of research by suggesting a novel measure to evaluate the performance of human matchers, based on calibration, a common meta-cognition measure. The proposed measure enables detailed analysis of various factors of the behavior of human matchers and their relation to human performance. Such analysis can be further utilized to develop heuristics and methods to better assess and improve the annotation quality.
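The abstract builds on calibration, a meta-cognition measure that compares a person's stated confidence with their actual correctness. As a minimal sketch of the general idea only (this is a standard calibration-error computation, not the paper's LOUC measure, and the confidence/correctness values are hypothetical):

```python
def calibration_error(confidences, correct):
    """Mean absolute gap between stated confidence (in [0, 1]) and actual
    correctness (True/False). 0.0 means perfectly calibrated."""
    assert len(confidences) == len(correct) and confidences
    gaps = [abs(c - (1.0 if ok else 0.0)) for c, ok in zip(confidences, correct)]
    return sum(gaps) / len(gaps)

# Hypothetical human-matcher annotations: confidence assigned to each
# attribute correspondence, and whether it agreed with the ground truth.
conf = [0.9, 0.8, 0.6, 0.95]
ok = [True, True, False, True]
print(calibration_error(conf, ok))
```

A well-calibrated annotator in this sense is one whose 80%-confidence decisions are right about 80% of the time; the paper's leave-one-out variant analyzes such behavior per decision rather than in aggregate.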


Bibliographic Details
Main Authors: Solomon, Matan; Genossar, Bar; Shraga, Roee; Gal, Avigdor
Format: Article
Language: eng
Subjects: Computer Science - Databases; Computer Science - Human-Computer Interaction
Online Access: Order full text
creator Solomon, Matan
Genossar, Bar
Shraga, Roee
Gal, Avigdor
description Schema matching is a core data integration task, focusing on identifying correspondences among attributes of multiple schemata. Numerous algorithmic approaches were suggested for schema matching over the years, aiming at solving the task with as little human involvement as possible. Yet, humans are still required in the loop -- to validate algorithms and to produce ground truth data for algorithms to be trained against. In recent years, a new research direction investigates the capabilities and behavior of humans while performing matching tasks. Previous works utilized this knowledge to predict, and even improve, the performance of human matchers. In this work, we continue this line of research by suggesting a novel measure to evaluate the performance of human matchers, based on calibration, a common meta-cognition measure. The proposed measure enables detailed analysis of various factors of the behavior of human matchers and their relation to human performance. Such analysis can be further utilized to develop heuristics and methods to better assess and improve the annotation quality.
doi_str_mv 10.48550/arxiv.2308.01761
format Article
creationdate 2023-08-03
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
oa free_for_read
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2308.01761
language eng
recordid cdi_arxiv_primary_2308_01761
source arXiv.org
subjects Computer Science - Databases
Computer Science - Human-Computer Interaction
title LOUC: Leave-One-Out-Calibration Measure for Analyzing Human Matcher Performance
url https://arxiv.org/abs/2308.01761