InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language

We present an interactive visual framework named InternGPT, or iGPT for short. The framework integrates chatbots that have planning and reasoning capabilities, such as ChatGPT, with non-verbal instructions such as pointing movements, which let users directly manipulate images or videos on the screen. Pointing movements (including gestures, cursors, etc.) provide more flexibility and precision for vision-centric tasks that require fine-grained control, editing, and generation of visual content. The name InternGPT stands for interaction, nonverbal, and chatbots. Unlike existing interactive systems that rely on language alone, the proposed iGPT incorporates pointing instructions, which significantly improves both the efficiency of communication between users and chatbots and the accuracy of chatbots on vision-centric tasks, especially in complicated visual scenarios containing more than two objects. In addition, iGPT uses an auxiliary control mechanism to improve the controllability of the LLM, and a large vision-language model termed Husky is fine-tuned for high-quality multi-modal dialogue (reaching 93.89% GPT-4 quality when compared with ChatGPT-3.5-turbo). We hope this work sparks new ideas and directions for future interactive visual systems. The code is available at https://github.com/OpenGVLab/InternGPT.
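
The abstract's central claim is that a pointing action resolves references that language alone leaves ambiguous (e.g., "remove this" in a scene with several objects). The minimal Python sketch below illustrates that idea only; the class names, fields, and dispatch logic are illustrative assumptions, not InternGPT's actual API (see the repository above for the real implementation).

```python
# Hypothetical sketch of pairing a pointing action with a language
# instruction, in the spirit of the abstract. All names here are
# assumptions for illustration, not the project's actual API.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class PointingInstruction:
    """A non-verbal instruction: where the user clicked on the image."""
    xy: Tuple[int, int]   # pixel coordinates of the click
    image_path: str       # image being manipulated


@dataclass
class Request:
    """A multi-modal request: language plus an optional pointing action."""
    text: str
    pointing: Optional[PointingInstruction] = None


def dispatch(req: Request) -> str:
    """Route a request to a vision tool.

    With a pointing instruction, the click disambiguates *which* object
    the text refers to, so a tool can act on exact pixels instead of
    guessing the target from language alone.
    """
    if req.pointing is not None:
        x, y = req.pointing.xy
        # e.g., hand (x, y) to a promptable segmenter, then apply the
        # edit described in req.text to the resulting mask
        return (f"segment object at ({x}, {y}) in {req.pointing.image_path}, "
                f"then: {req.text}")
    # Language-only fallback: the chatbot must infer the target itself
    return f"resolve target from text alone: {req.text}"


# Example: "remove this object" is ambiguous as text, but precise with a click.
print(dispatch(Request(text="remove this object",
                       pointing=PointingInstruction(xy=(412, 305),
                                                    image_path="street.jpg"))))
```

Once the click is attached, a downstream promptable segmenter could turn the coordinate into an exact mask, which is what makes the fine-grained editing described in the abstract possible.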

Bibliographic details
Published in: arXiv.org, 2023-06
Main authors: Liu, Zhaoyang; He, Yinan; Wang, Wenhai; Wang, Weiyun; Wang, Yi; Chen, Shoufa; Zhang, Qinglong; Lai, Zeqiang; Yang, Yang; Li, Qingyun; Yu, Jiashuo; Li, Kunchang; Chen, Zhe; Yang, Xue; Zhu, Xizhou; Wang, Yali; Wang, Limin; Luo, Ping; Dai, Jifeng; Yu, Qiao
Format: Article
Language: English
Subjects: Chatbots; Image manipulation; Interactive systems; Vision; Visual tasks
EISSN: 2331-8422
Online access: Full text