Grounded Situation Recognition with Transformers
Grounded Situation Recognition (GSR) is the task that not only classifies a salient action (verb), but also predicts entities (nouns) associated with semantic roles and their locations in the given image. Inspired by the remarkable success of Transformers in vision tasks, we propose a GSR model based on a Transformer encoder-decoder architecture. The attention mechanism of our model enables accurate verb classification by capturing high-level semantic features of an image effectively, and allows the model to flexibly deal with the complicated and image-dependent relations between entities for improved noun classification and localization. Our model is the first Transformer architecture for GSR, and achieves the state of the art in every evaluation metric on the SWiG benchmark. Our code is available at https://github.com/jhcho99/gsrtr .
Saved in:
Published in: | arXiv.org 2021-11 |
---|---|
Main authors: | Cho, Junhyeong; Yoon, Youngseok; Lee, Hyeonjun; Kwak, Suha |
Format: | Article |
Language: | eng |
Subjects: | Coders; Encoders-Decoders; Image classification; Recognition; Semantics; State-of-the-art reviews |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Cho, Junhyeong; Yoon, Youngseok; Lee, Hyeonjun; Kwak, Suha |
description | Grounded Situation Recognition (GSR) is the task that not only classifies a salient action (verb), but also predicts entities (nouns) associated with semantic roles and their locations in the given image. Inspired by the remarkable success of Transformers in vision tasks, we propose a GSR model based on a Transformer encoder-decoder architecture. The attention mechanism of our model enables accurate verb classification by capturing high-level semantic features of an image effectively, and allows the model to flexibly deal with the complicated and image-dependent relations between entities for improved noun classification and localization. Our model is the first Transformer architecture for GSR, and achieves the state of the art in every evaluation metric on the SWiG benchmark. Our code is available at https://github.com/jhcho99/gsrtr . |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2600525645 |
source | Free E-Journals |
subjects | Coders; Encoders-Decoders; Image classification; Recognition; Semantics; State-of-the-art reviews |
title | Grounded Situation Recognition with Transformers |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-09T13%3A04%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Grounded%20Situation%20Recognition%20with%20Transformers&rft.jtitle=arXiv.org&rft.au=Cho,%20Junhyeong&rft.date=2021-11-19&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2600525645%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2600525645&rft_id=info:pmid/&rfr_iscdi=true |