SYSTEMS AND METHODS FOR PRE-PROCESSING ANATOMICAL IMAGES FOR FEEDING INTO A CLASSIFICATION NEURAL NETWORK
There is provided a method comprising: providing two anatomical images 104 of a target individual, each captured at a unique orientation of the target individual, inputting first and second anatomical images respectively into a first and second convolutional neural network (CNN) of a classifier to respectively output first and second feature vectors, inputting a concatenation of the first and second feature vectors into a fully connected layer of the classifier 110, and computing an indication of distinct visual finding(s) 112 present in the anatomical images by the fully connected layer, wherein the statistical classifier is trained on a training dataset including two anatomical images of each respective sample individual, each image captured at a respective unique orientation of the target individual, and a tag created based on an analysis that maps respective individual sentences of a text-based radiology report to one of multiple indications of visual findings.
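The abstract describes a two-branch architecture: one CNN per image orientation, concatenation of the two feature vectors, and a fully connected layer that scores multiple distinct visual findings. Below is a minimal sketch of that arrangement, assuming PyTorch, ResNet-18 backbones, 224x224 inputs, and a 40-finding output head; the abstract only specifies "a CNN" per view and a fully connected layer, so those choices are illustrative rather than taken from the patent.

```python
# Minimal sketch of the dual-view classifier described in the abstract:
# two CNN branches (one per image orientation), concatenated feature
# vectors, and a fully connected layer producing one logit per finding.
# ResNet-18 backbones, 224x224 inputs, and 40 findings are illustrative
# assumptions; the patent abstract does not fix these choices.
import torch
import torch.nn as nn
from torchvision import models


class DualViewClassifier(nn.Module):
    def __init__(self, num_findings: int = 40):
        super().__init__()
        # One CNN per orientation (e.g., frontal and lateral views).
        self.cnn_first = models.resnet18(weights=None)
        self.cnn_second = models.resnet18(weights=None)
        feat_dim = self.cnn_first.fc.in_features  # 512 for ResNet-18
        # Drop the ImageNet heads so each branch outputs a feature vector.
        self.cnn_first.fc = nn.Identity()
        self.cnn_second.fc = nn.Identity()
        # Fully connected layer over the concatenated feature vectors.
        self.fc = nn.Linear(2 * feat_dim, num_findings)

    def forward(self, first_image: torch.Tensor, second_image: torch.Tensor) -> torch.Tensor:
        v_first = self.cnn_first(first_image)     # first feature vector
        v_second = self.cnn_second(second_image)  # second feature vector
        return self.fc(torch.cat([v_first, v_second], dim=1))


# Usage: a batch of image pairs, one image per orientation.
model = DualViewClassifier(num_findings=40)
frontal = torch.randn(2, 3, 224, 224)
lateral = torch.randn(2, 3, 224, 224)
logits = model(frontal, lateral)   # shape (2, 40)
probs = torch.sigmoid(logits)      # per-finding indication scores
```

A second sketch, after the metadata table at the end of this record, illustrates how the training tags described in the abstract (one visual-finding indication per report sentence) might be constructed.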
Main authors: | LASERSON, Jonathan ; GOZ, Eli ; BRESTEL, Chen |
---|---|
Format: | Patent |
Language: | eng ; fre ; ger |
Online access: | Order full text |
creator | LASERSON, Jonathan ; GOZ, Eli ; BRESTEL, Chen |
---|---|
description | There is provided a method comprising: providing two anatomical images 104 of a target individual, each captured at a unique orientation of the target individual, inputting first and second anatomical images respectively into a first and second convolutional neural network (CNN) of a classifier to respectively output first and second feature vectors, inputting a concatenation of the first and second feature vectors into a fully connected layer of the classifier 110, and computing an indication of distinct visual finding(s) 112 present in the anatomical images by the fully connected layer, wherein the statistical classifier is trained on a training dataset including two anatomical images of each respective sample individual, each image captured at a respective unique orientation of the target individual, and a tag created based on an analysis that maps respective individual sentences of a text based radiology report to one of multiple indications of visual findings. |
format | Patent |
fulltext | fulltext_linktorsrc |
language | eng ; fre ; ger |
recordid | cdi_epo_espacenet_EP3791310A1 |
source | esp@cenet |
title | SYSTEMS AND METHODS FOR PRE-PROCESSING ANATOMICAL IMAGES FOR FEEDING INTO A CLASSIFICATION NEURAL NETWORK |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T15%3A49%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=LASERSON,%20Jonathan&rft.date=2021-03-17&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EEP3791310A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
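The description above also states that each training tag is created by an analysis that maps individual sentences of a text-based radiology report to one of multiple indications of visual findings, without specifying that analysis. The sketch below substitutes a simple keyword lookup purely for illustration; the finding names and keywords are invented examples, not terms from the patent.

```python
# Illustrative only: the record says each training tag comes from an analysis
# that maps individual report sentences to one of multiple visual-finding
# indications, but does not describe that analysis. A keyword lookup stands
# in for it here; the finding names and keywords are invented examples.
from typing import Dict, List, Set

FINDING_KEYWORDS: Dict[str, List[str]] = {
    "cardiomegaly": ["cardiomegaly", "enlarged cardiac silhouette"],
    "pleural_effusion": ["pleural effusion"],
    "pneumothorax": ["pneumothorax"],
}


def tag_report(report_text: str) -> Set[str]:
    """Map each sentence of a report to at most one finding indication."""
    findings: Set[str] = set()
    for sentence in report_text.lower().split("."):
        for finding, keywords in FINDING_KEYWORDS.items():
            if any(keyword in sentence for keyword in keywords):
                findings.add(finding)
                break  # one indication per sentence, as the abstract describes
    return findings


print(tag_report("Cardiomegaly is noted. Small left pleural effusion."))
# e.g. {'cardiomegaly', 'pleural_effusion'} (set order may vary)
```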