Retinal vessel segmentation method based on mixed attention mechanism and asymmetric convolution
creator | CHEN YI ; HAN CHENGRUI ; CHEN LAIXIAN ; TIAN YIDUO ; REN XIANLIN ; CAO JIAJIA
description | The invention discloses a retinal vessel segmentation method based on a mixed attention mechanism and asymmetric convolution. The method comprises the following steps: first, acquiring data and splitting it into a training set and a test set, preprocessing the images of both sets, applying data augmentation to the training images, extracting patches from the training and test sets, and performing image segmentation on both sets; then constructing a neural network model that integrates a mixed attention mechanism and asymmetric convolution, training the model on the training set, and validating its performance on the test set. A fundus image to be segmented is input into the trained model, and the network outputs the retinal vessel segmentation result. Because asymmetric convolution kernels are used, the number of trainable parameters is greatly reduced and the computational complexity is lowered.
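The abstract names the two building blocks but does not disclose the exact architecture. The sketch below is one plausible reading, assuming the asymmetric convolution is a 3×1 + 1×3 pair standing in for a 3×3 kernel (6·C_in·C_out weights instead of 9·C_in·C_out, which is where the claimed parameter and complexity reduction would come from) and the mixed attention is a CBAM-style combination of channel and spatial attention. The module names, hyperparameters, and the 48×48 patch size are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch only; the patent text does not disclose the actual layers.
import torch
import torch.nn as nn


class AsymmetricConvBlock(nn.Module):
    """3x1 and 1x3 convolutions in place of a single 3x3 convolution.

    A 3x3 kernel needs 9*C_in*C_out weights; the 3x1 + 1x3 pair needs only
    6*C_in*C_out, illustrating the parameter reduction claimed in the abstract.
    """

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(3, 1), padding=(1, 0)),
            nn.Conv2d(out_ch, out_ch, kernel_size=(1, 3), padding=(0, 1)),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


class MixedAttention(nn.Module):
    """Assumed 'mixed attention': channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel gate: squeeze-and-excitation style re-weighting of feature channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: 7x7 convolution over pooled statistics highlights vessel regions.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                      # re-weight channels
        avg_map = x.mean(dim=1, keepdim=True)             # per-pixel channel mean
        max_map, _ = x.max(dim=1, keepdim=True)           # per-pixel channel max
        x = x * self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return x


if __name__ == "__main__":
    # A single 48x48 grayscale fundus patch, as produced by a patch-extraction step.
    patch = torch.randn(1, 1, 48, 48)
    features = AsymmetricConvBlock(1, 32)(patch)
    attended = MixedAttention(32)(features)
    print(attended.shape)  # torch.Size([1, 32, 48, 48])
```

In a U-Net-style segmentation backbone, blocks like these would typically replace the standard convolutional blocks in the encoder and decoder; the patent abstract does not specify where the attention is inserted, so that placement is left open here.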
format | Patent |
creationdate | 2023-03-03
link | https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20230303&DB=EPODOC&CC=CN&NR=115731242A
fulltext | fulltext_linktorsrc |
language | chi ; eng |
recordid | cdi_epo_espacenet_CN115731242A |
source | esp@cenet |
subjects | CALCULATING ; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS ; COMPUTING ; COUNTING ; IMAGE DATA PROCESSING OR GENERATION, IN GENERAL ; PHYSICS
title | Retinal vessel segmentation method based on mixed attention mechanism and asymmetric convolution |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T15%3A38%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=CHEN%20YI&rft.date=2023-03-03&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN115731242A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |