Sequential Decision-Making for Inline Text Autocomplete

Autocomplete suggestions are fundamental to modern text entry systems, with applications in domains such as messaging and email composition. Typically, autocomplete suggestions are generated from a language model with a confidence threshold. However, this threshold does not directly take into account the cognitive load imposed on the user by surfacing suggestions, such as the effort to switch contexts from typing to reading the suggestion, and the time to decide whether to accept the suggestion. In this paper, we study the problem of improving inline autocomplete suggestions in text entry systems via a sequential decision-making formulation, and use reinforcement learning to learn suggestion policies through repeated interactions with a target user over time. This formulation allows us to factor cognitive load into the objective of training an autocomplete model, through a reward function based on text entry speed. We present theoretical and experimental evidence that, under certain objectives, the sequential decision-making formulation of the autocomplete problem provides a better suggestion policy than myopic single-step reasoning. However, aligning these objectives with real users requires further exploration. In particular, we hypothesize that the objectives under which sequential decision-making can improve autocomplete systems are not tailored solely to text entry speed, but more broadly to metrics such as user satisfaction and convenience.
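The sketch below is not the paper's code; the cost constants, the simulated user, and every name in it (reward, threshold_policy, evaluate) are assumptions introduced purely for illustration. It only shows how a reward tied to text-entry time can charge a fixed context-switch cost for every surfaced suggestion, something a bare confidence threshold never accounts for; in the sequential formulation, a reinforcement-learning agent would be trained to maximize the discounted sum of these per-keystroke rewards rather than reasoning one step at a time.

```python
"""
Illustrative sketch (not from the paper): frames inline autocomplete as a
per-keystroke decision problem with a time-based reward. All constants,
names, and the simulated user below are hypothetical placeholders.
"""

from dataclasses import dataclass
import random

SHOW, HIDE = "show", "hide"  # actions available to the suggestion policy


@dataclass
class StepOutcome:
    keystrokes_saved: int  # characters skipped if the suggestion is accepted
    accepted: bool         # whether the user accepted the shown suggestion


def reward(action: str, outcome: StepOutcome,
           type_cost: float = 1.0,   # time units to type one character
           read_cost: float = 2.5) -> float:
    """Time saved (or lost) this step, relative to just typing.

    Surfacing a suggestion always charges a context-switch/reading cost,
    so it only pays off when the user accepts and skips enough keystrokes.
    """
    if action == HIDE:
        return 0.0  # baseline: user types as usual
    if outcome.accepted:
        return outcome.keystrokes_saved * type_cost - read_cost
    return -read_cost  # suggestion shown, read, and rejected


def threshold_policy(confidence: float, tau: float = 0.8) -> str:
    """Myopic baseline: show whenever the language model is confident enough."""
    return SHOW if confidence >= tau else HIDE


def evaluate(policy, n_steps: int = 10_000, seed: int = 0) -> float:
    """Average per-keystroke reward under a crude simulated user."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_steps):
        confidence = rng.random()             # stand-in for LM confidence
        accepted = rng.random() < confidence  # user accepts more when the LM is right
        outcome = StepOutcome(keystrokes_saved=rng.randint(1, 8), accepted=accepted)
        total += reward(policy(confidence), outcome)
    return total / n_steps


if __name__ == "__main__":
    for tau in (0.5, 0.8, 0.95):
        print(f"tau={tau:.2f}: avg reward per keystroke = "
              f"{evaluate(lambda c, t=tau: threshold_policy(c, t)):.3f}")
```

Running this only exercises the myopic threshold baseline; plugging the same reward into a standard RL training loop would give the sequential counterpart that the abstract compares against.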

Bibliographic details
Main authors: Chitnis, Rohan; Yang, Shentao; Geramifard, Alborz
Format: Article
Language: English
Published: 2024-03-21
Subjects: Computer Science - Computation and Language; Computer Science - Human-Computer Interaction; Computer Science - Learning
DOI: 10.48550/arxiv.2403.15502
Source: arXiv.org
Online access: https://arxiv.org/abs/2403.15502