Low-light image enhancement method for extracting and fusing local and global features

The invention discloses a low-light image enhancement method that extracts and fuses local and global features, and relates to the technical field of image processing. The method uses a built BrigtFormer network structure in which cross convolution and a self-attention mechanism are organically unified…

Detailed description

Saved in:
Bibliographic details
Main authors: YANG WENMING, JIANG LIJUN, WANG YONG, YUAN XINLIN, LI BO
Format: Patent
Language: Chinese; English
Subjects:
Online access: Order full text
creator YANG WENMING
JIANG LIJUN
WANG YONG
YUAN XINLIN
LI BO
description The invention discloses a low-light image enhancement method that extracts and fuses local and global features, and relates to the technical field of image processing. The method builds a BrigtFormer network structure that organically unifies cross convolution and a self-attention mechanism, so that the complementary advantages of local feature extraction and global dependency modeling are obtained at the same time, and features are fused along both the spatial and channel dimensions by a feature equalization fusion unit. By extracting and fusing the local and global features of the image simultaneously, the method establishes a new low-illumination image enhancement network model; the model fully combines the local details learned by the convolution module with the global information learned by the self-attention module and, through the new local-global feature fusion module, effectively enhances the low-illumination image.
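The abstract describes an architecture but the patent record publishes no code; as a rough illustration only, the following PyTorch sketch shows what a block that pairs a cross-convolution (local) branch with a self-attention (global) branch and then fuses the two along the channel and spatial dimensions might look like. Every name and hyper-parameter below (CrossConvBranch, GlobalAttentionBranch, FeatureEqualizationFusion, LocalGlobalBlock, kernel sizes, head counts) is an assumption made for illustration and is not the patented BrigtFormer implementation.

# Illustrative sketch only; not the patented BrigtFormer network.
import torch
import torch.nn as nn


class CrossConvBranch(nn.Module):
    """Local branch: 'cross' convolution approximated by parallel 1xk and kx1 depthwise convs (assumed design)."""
    def __init__(self, channels: int, k: int = 7):
        super().__init__()
        pad = k // 2
        self.horizontal = nn.Conv2d(channels, channels, (1, k), padding=(0, pad), groups=channels)
        self.vertical = nn.Conv2d(channels, channels, (k, 1), padding=(pad, 0), groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.pointwise(self.horizontal(x) + self.vertical(x))


class GlobalAttentionBranch(nn.Module):
    """Global branch: multi-head self-attention over flattened spatial positions."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)
        return out.transpose(1, 2).reshape(b, c, h, w)


class FeatureEqualizationFusion(nn.Module):
    """Fuses local and global features by re-weighting along the channel and spatial dimensions."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2 * channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, local_feat, global_feat):
        fused = torch.cat([local_feat, global_feat], dim=1)
        fused = fused * self.channel_gate(fused)            # channel re-weighting
        fused = fused * self.spatial_gate(fused)            # spatial re-weighting
        return self.project(fused)


class LocalGlobalBlock(nn.Module):
    """One block: parallel local/global branches followed by equalized fusion and a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.local = CrossConvBranch(channels)
        self.globl = GlobalAttentionBranch(channels)
        self.fuse = FeatureEqualizationFusion(channels)

    def forward(self, x):
        return x + self.fuse(self.local(x), self.globl(x))


if __name__ == "__main__":
    block = LocalGlobalBlock(32)
    feat = torch.randn(1, 32, 64, 64)                        # feature map from a hypothetical stem conv
    print(block(feat).shape)                                  # torch.Size([1, 32, 64, 64])

Here the 1xk and kx1 depthwise convolutions stand in for the "cross convolution" and the gating over concatenated features stands in for the "feature equalization fusion unit"; a full enhancement network would presumably stack several such blocks inside an encoder-decoder operating on the low-light input.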
format Patent
fulltext fulltext_linktorsrc
language chi ; eng
recordid cdi_epo_espacenet_CN114972134A
source esp@cenet
subjects CALCULATING
COMPUTING
COUNTING
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
PHYSICS
title Low-light image enhancement method for extracting and fusing local and global features
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T13%3A43%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=YANG%20WENMING&rft.date=2022-08-30&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN114972134A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true