Automatic Detection and Diagnosis of Biased Online Experiments


Bibliographic Details
Main Authors: Chen, Nanyu; Liu, Min; Xu, Ya
Format: Article
Language: English
Subjects: Statistics - Applications
Online Access: Order full text
creator Chen, Nanyu; Liu, Min; Xu, Ya
description We have seen massive growth of online experiments at LinkedIn, and in industry at large. It is now more important than ever to create an intelligent A/B platform that can truly democratize A/B testing by allowing everyone to make quality decisions, regardless of their skillset. With the tremendous knowledge base created around experimentation, we are able to mine through historical data and discover the most common causes of biased experiments. In this paper, we share four such common causes and how we built automatic detection and diagnosis of these root causes into our A/B testing platform. The root causes include design-imposed bias, self-selection bias, the novelty effect, and the trigger-day effect. We discuss in detail what each bias is and the scalable algorithm we developed to detect it. Automatically surfacing the existence and root cause of bias for every experiment is an important milestone towards intelligent A/B testing.
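The abstract names the four bias types but does not spell out the detection algorithms. As a purely illustrative sketch (not the authors' method), the snippet below shows one common way a platform could flag a possible novelty effect: compare the treatment-vs-control lift measured in an early window of the experiment against the lift in a later window and test whether the two differ. All function names, inputs, and thresholds here are hypothetical.

# Illustrative sketch only (NOT the paper's algorithm): flag a possible novelty
# effect by testing whether the early lift differs from the late lift.
import numpy as np
from scipy import stats

def novelty_effect_flag(t_early, c_early, t_late, c_late, alpha=0.05):
    """Two-sample z-style test on the difference of lifts (early vs. late).

    Each argument is a 1-D array of per-user metric values for
    treatment (t_*) or control (c_*) in the early or late window.
    Returns (flagged, z, p_value).
    """
    def lift_and_var(t, c):
        # Difference in means and its sampling variance for one window.
        lift = t.mean() - c.mean()
        var = t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c)
        return lift, var

    lift_e, var_e = lift_and_var(np.asarray(t_early), np.asarray(c_early))
    lift_l, var_l = lift_and_var(np.asarray(t_late), np.asarray(c_late))

    z = (lift_e - lift_l) / np.sqrt(var_e + var_l)
    p = 2 * stats.norm.sf(abs(z))  # two-sided p-value
    return p < alpha, z, p

# Example with synthetic data: the effect shrinks from ~0.5 early to ~0.1 late,
# so the check should flag a time-varying (novelty-like) effect.
rng = np.random.default_rng(0)
flagged, z, p = novelty_effect_flag(
    t_early=rng.normal(1.5, 1, 5000), c_early=rng.normal(1.0, 1, 5000),
    t_late=rng.normal(1.1, 1, 5000),  c_late=rng.normal(1.0, 1, 5000),
)
print(f"novelty flagged={flagged}, z={z:.2f}, p={p:.3g}")

A similar early-versus-late comparison keyed to each member's first trigger day is one simple way a trigger-day effect is often examined; consult the paper itself for the algorithms the authors actually deployed.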
doi 10.48550/arxiv.1808.00114
format Article
identifier DOI: 10.48550/arxiv.1808.00114
language eng
recordid cdi_arxiv_primary_1808_00114
source arXiv.org
subjects Statistics - Applications
title Automatic Detection and Diagnosis of Biased Online Experiments