Morpho-Aware Global Attention for Image Matting

Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) face inherent challenges in image matting, particularly in preserving fine structural details. ViTs, with their global receptive field enabled by the self-attention mechanism, often lose local details such as hair strands. Conversely, CNNs, constrained by their local receptive field, rely on deeper layers to approximate global context but struggle to retain fine structures at greater depths. To overcome these limitations, we propose a novel Morpho-Aware Global Attention (MAGA) mechanism, designed to effectively capture the morphology of fine structures. MAGA employs Tetris-like convolutional patterns to align the local shapes of fine structures, ensuring optimal local correspondence while maintaining sensitivity to morphological details. The extracted local morphology information is used as query embeddings, which are projected onto global key embeddings to emphasize local details in a broader context. Subsequently, by projecting onto value embeddings, MAGA seamlessly integrates these emphasized morphological details into a unified global structure. This approach enables MAGA to simultaneously focus on local morphology and unify these details into a coherent whole, effectively preserving fine structures. Extensive experiments show that our MAGA-based ViT achieves significant performance gains, outperforming state-of-the-art methods across two benchmarks with average improvements of 4.3% in SAD and 39.5% in MSE.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Yang, Jingru, Cao, Chengzhi, Xu, Chentianye, Xie, Zhongwei, Huang, Kaixiang, Zhou, Yang, He, Shengfeng
Format: Artikel
Language: eng
Subjects:
Online Access: Order full text
creator Yang, Jingru; Cao, Chengzhi; Xu, Chentianye; Xie, Zhongwei; Huang, Kaixiang; Zhou, Yang; He, Shengfeng
description Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) face inherent challenges in image matting, particularly in preserving fine structural details. ViTs, with their global receptive field enabled by the self-attention mechanism, often lose local details such as hair strands. Conversely, CNNs, constrained by their local receptive field, rely on deeper layers to approximate global context but struggle to retain fine structures at greater depths. To overcome these limitations, we propose a novel Morpho-Aware Global Attention (MAGA) mechanism, designed to effectively capture the morphology of fine structures. MAGA employs Tetris-like convolutional patterns to align the local shapes of fine structures, ensuring optimal local correspondence while maintaining sensitivity to morphological details. The extracted local morphology information is used as query embeddings, which are projected onto global key embeddings to emphasize local details in a broader context. Subsequently, by projecting onto value embeddings, MAGA seamlessly integrates these emphasized morphological details into a unified global structure. This approach enables MAGA to simultaneously focus on local morphology and unify these details into a coherent whole, effectively preserving fine structures. Extensive experiments show that our MAGA-based ViT achieves significant performance gains, outperforming state-of-the-art methods across two benchmarks with average improvements of 4.3% in SAD and 39.5% in MSE.
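The attention flow described in the abstract (local-morphology queries projected onto global keys, then onto values) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: tokens are placed on a 1-D grid, a single hand-picked kernel stands in for the paper's Tetris-like convolutional patterns, and all function and parameter names are hypothetical.

```python
import numpy as np

def local_morphology_queries(x, kernel):
    """Extract local-shape responses by weighting each position's
    neighborhood with a small pattern (stand-in for the paper's
    Tetris-like convolutions). x: (N, d) token features on a 1-D grid."""
    n, d = x.shape
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))          # zero-pad the token axis
    q = np.zeros_like(x)
    for i in range(n):
        window = xp[i:i + k]                      # (k, d) local neighborhood
        q[i] = (kernel[:, None] * window).sum(axis=0)
    return q

def maga_attention(x, Wk, Wv, kernel):
    """Morphology-aware global attention sketch: queries carry local
    morphology; keys and values are global linear projections."""
    q = local_morphology_queries(x, kernel)       # local morphology -> queries
    k = x @ Wk                                    # global key embeddings
    v = x @ Wv                                    # global value embeddings
    scores = q @ k.T / np.sqrt(x.shape[1])        # local details vs. global context
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return attn @ v                               # fuse details into one structure
```

The key departure from standard self-attention is that the query path is not a plain linear projection but a shape-sensitive local operator, so the attention scores emphasize positions whose local morphology matches.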
doi_str_mv 10.48550/arxiv.2411.10251
format Article
fullrecord arXiv record 2411.10251 (source: arXiv.org, Open Access Repository); published 2024-11-15; rights: http://creativecommons.org/licenses/by/4.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2411.10251
language eng
recordid cdi_arxiv_primary_2411_10251
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Morpho-Aware Global Attention for Image Matting
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T20%3A42%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Morpho-Aware%20Global%20Attention%20for%20Image%20Matting&rft.au=Yang,%20Jingru&rft.date=2024-11-15&rft_id=info:doi/10.48550/arxiv.2411.10251&rft_dat=%3Carxiv_GOX%3E2411_10251%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true