Double-view eye fundus image fusion method based on deep learning

The invention provides a double-view eye fundus image fusion method based on deep learning, comprising the following steps: S1, preprocessing the two to-be-detected images to obtain two preprocessed images; S2, building and training a convolutional neural network model to obtain a trained model, called M-net; S3, dividing M-net into two parts, M-net Part I and M-net Part II; S4, feeding the two preprocessed images separately into M-net Part I for feature extraction, obtaining two image feature maps; S5, splicing the two feature maps to obtain a spliced image; S6, feeding the spliced image into M-net Part II for feature fusion.
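The steps above describe a shared-backbone, late-fusion pipeline: one feature extractor applied to each view, concatenation of the two feature maps, then a fusion stage. A minimal NumPy sketch of that data flow is below; the tiny stand-in layers, weights, and function names (`mnet_part1`, `mnet_part2`) are illustrative assumptions, not the patent's actual M-net architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    # Naive single-channel "valid" convolution, enough to stand in for a conv layer.
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# S2/S3: "M-net Part I" reduced to a single shared conv layer with random weights.
part1_kernel = rng.standard_normal((3, 3))

def mnet_part1(img):
    # S4: shared feature extraction, applied identically to each view (conv + ReLU).
    return np.maximum(conv2d_valid(img, part1_kernel), 0.0)

def mnet_part2(spliced):
    # S6: feature fusion, sketched here as averaging over the spliced views.
    return spliced.mean(axis=0)

# S1: two preprocessed fundus views (random 32x32 grayscale stand-ins).
view_a = rng.standard_normal((32, 32))
view_b = rng.standard_normal((32, 32))

# S4: extract one feature map per view with the shared Part I.
feat_a = mnet_part1(view_a)
feat_b = mnet_part1(view_b)

# S5: splice (stack) the two feature maps along a new view axis.
spliced = np.stack([feat_a, feat_b], axis=0)  # shape (2, 30, 30)

# S6: fuse the spliced features into a single map.
fused = mnet_part2(spliced)                   # shape (30, 30)
print(spliced.shape, fused.shape)
```

In a real implementation the splice would typically concatenate along the channel axis and Part II would be further trainable layers; the averaging here only marks where that fusion stage sits in the pipeline.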

Detailed description

Bibliographic details
Main authors: FENG RUI, JIANG LULU, SHAO JINJIE, HOU JUNLIN
Format: Patent
Language: Chinese; English
description The invention provides a double-view eye fundus image fusion method based on deep learning, comprising the following steps: S1, preprocessing the two to-be-detected images to obtain two preprocessed images; S2, building and training a convolutional neural network model to obtain a trained model, called M-net; S3, dividing M-net into two parts, M-net Part I and M-net Part II; S4, feeding the two preprocessed images separately into M-net Part I for feature extraction, obtaining two image feature maps; S5, splicing the two feature maps to obtain a spliced image; S6, feeding the spliced image into M-net Part II for feature fusion.
recordid cdi_epo_espacenet_CN112869706A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
DIAGNOSIS
HUMAN NECESSITIES
HYGIENE
IDENTIFICATION
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
MEDICAL OR VETERINARY SCIENCE
PHYSICS
SURGERY