The Unsurprising Effectiveness of Pre-Trained Vision Models for Control

International Conference on Machine Learning (ICML), 2022, 162:17359-17371

Recent years have seen the emergence of pre-trained representations as a powerful abstraction for AI applications in computer vision, natural language, and speech. However, policy learning for control is still dominated by a tabula-rasa learning paradigm, with visuo-motor policies often trained from scratch using data from deployment environments. In this context, we revisit and study the role of pre-trained visual representations for control, and in particular representations trained on large-scale computer vision datasets. Through extensive empirical evaluation in diverse control domains (Habitat, DeepMind Control, Adroit, Franka Kitchen), we isolate and study the importance of different representation training methods, data augmentations, and feature hierarchies. Overall, we find that pre-trained visual representations can be competitive or even better than ground-truth state representations to train control policies. This is in spite of using only out-of-domain data from standard vision datasets, without any in-domain data from the deployment environments. Source code and more at https://sites.google.com/view/pvr-control.
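
The recipe the abstract describes is: freeze a vision model pre-trained on standard image datasets, use its features as the observation for a control policy, and train only the policy. The following is a minimal illustrative sketch of that pipeline, not the authors' released code; the ResNet-50 backbone, the 2048-dimensional feature size, and the 7-dimensional action space are assumptions for illustration (the paper itself compares several pre-training methods and feature hierarchies).

import torch
import torch.nn as nn
import torchvision

# Frozen, ImageNet-pre-trained backbone: out-of-domain features, no in-domain fine-tuning.
# (Assumed backbone; downloading the weights requires internet access.)
encoder = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.DEFAULT)
encoder.fc = nn.Identity()              # expose the 2048-d pooled features instead of class logits
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False

# Small trainable policy head on top of the frozen representation.
action_dim = 7                          # hypothetical, e.g. a 7-DoF arm
policy = nn.Sequential(
    nn.Linear(2048, 256), nn.ReLU(),
    nn.Linear(256, action_dim),
)

# One step: RGB observation -> frozen features -> action. Only `policy` would receive gradients.
obs = torch.rand(1, 3, 224, 224)        # dummy image observation
with torch.no_grad():
    features = encoder(obs)
action = policy(features)
print(action.shape)                     # torch.Size([1, 7])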

Bibliographic Details
Main authors: Parisi, Simone; Rajeswaran, Aravind; Purushwalkam, Senthil; Gupta, Abhinav
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Computer Science - Robotics
Online access: Order full text
container_end_page 17371
container_issue
container_start_page 17359
container_title International Conference on Machine Learning (ICML)
container_volume 162
creator Parisi, Simone ; Rajeswaran, Aravind ; Purushwalkam, Senthil ; Gupta, Abhinav
description International Conference on Machine Learning (ICML), 2022, 162:17359-17371 Recent years have seen the emergence of pre-trained representations as a powerful abstraction for AI applications in computer vision, natural language, and speech. However, policy learning for control is still dominated by a tabula-rasa learning paradigm, with visuo-motor policies often trained from scratch using data from deployment environments. In this context, we revisit and study the role of pre-trained visual representations for control, and in particular representations trained on large-scale computer vision datasets. Through extensive empirical evaluation in diverse control domains (Habitat, DeepMind Control, Adroit, Franka Kitchen), we isolate and study the importance of different representation training methods, data augmentations, and feature hierarchies. Overall, we find that pre-trained visual representations can be competitive or even better than ground-truth state representations to train control policies. This is in spite of using only out-of-domain data from standard vision datasets, without any in-domain data from the deployment environments. Source code and more at https://sites.google.com/view/pvr-control.
doi_str_mv 10.48550/arxiv.2203.03580
format Article
identifier DOI: 10.48550/arxiv.2203.03580
ispartof International Conference on Machine Learning (ICML), 2022, 162:17359-17371
issn
language eng
recordid cdi_arxiv_primary_2203_03580
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
Computer Science - Robotics
title The Unsurprising Effectiveness of Pre-Trained Vision Models for Control