AUTOMATED, COLLABORATIVE PROCESS FOR AI MODEL PRODUCTION
Embodiments described herein provide for training a machine learning model for automatic organ segmentation. A processor executes a machine learning model using an image to output at least one predicted organ label for a plurality of pixels of the image. Upon transmitting the at least one predicted organ label to a correction computing device, the processor receives one or more image fragments identifying corrections to the at least one predicted organ label. Upon transmitting the one or more image fragments and the image to a plurality of reviewer computing devices, the processor receives a plurality of inputs indicating whether the one or more image fragments are correct. When a number of inputs indicating an image fragment of the image fragments is correct exceeds a threshold, the processor aggregates the image fragment into a training data set. The processor trains the machine learning model with the training data set.
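The review-and-aggregation loop in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the names `ImageFragment`, `aggregate_into_training_set`, and the value of `VOTE_THRESHOLD` are hypothetical stand-ins for the correction fragments, aggregation step, and threshold the abstract describes.

```python
from dataclasses import dataclass

# Illustrative threshold; the abstract only says the number of "correct"
# reviewer inputs must exceed "a threshold", so this value is an assumption.
VOTE_THRESHOLD = 3

@dataclass
class ImageFragment:
    """A correction to the model's predicted organ label for some pixels."""
    image_id: str
    pixel_indices: tuple      # pixels whose predicted label was corrected
    corrected_label: str      # e.g. "liver" where the model predicted "kidney"
    votes_correct: int = 0    # reviewer inputs marking this fragment correct

def aggregate_into_training_set(fragments, training_set):
    """Per the abstract: a fragment joins the training data set once the
    number of reviewer inputs indicating it is correct exceeds the threshold."""
    for fragment in fragments:
        if fragment.votes_correct > VOTE_THRESHOLD:
            training_set.append(fragment)
    return training_set

# Example: two reviewed fragments; only one passes the vote threshold,
# so only it is aggregated for the next round of model training.
fragments = [
    ImageFragment("ct_001", (14, 15, 16), "liver", votes_correct=5),
    ImageFragment("ct_001", (90, 91), "spleen", votes_correct=1),
]
training_set = aggregate_into_training_set(fragments, [])
print([f.corrected_label for f in training_set])  # ['liver']
```

The remaining steps (per-pixel prediction, distribution to correction and reviewer devices, and retraining on the aggregated set) would wrap this aggregation step in a loop; the abstract does not specify their interfaces.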
Saved in:
Main authors: | Fartaria, Mario; Haas, Benjamin M; Fluckiger, Simon; Genghi, Angelo; Friman, Anri Maarita; Maslowski, Alexander E |
Format: | Patent |
Language: | eng |
Subjects: | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; PHYSICS |
Online access: | Order full text |
creator | Fartaria, Mario; Haas, Benjamin M; Fluckiger, Simon; Genghi, Angelo; Friman, Anri Maarita; Maslowski, Alexander E |
description | Embodiments described herein provide for training a machine learning model for automatic organ segmentation. A processor executes a machine learning model using an image to output at least one predicted organ label for a plurality of pixels of the image. Upon transmitting the at least one predicted organ label to a correction computing device, the processor receives one or more image fragments identifying corrections to the at least one predicted organ label. Upon transmitting the one or more image fragments and the image to a plurality of reviewer computing devices, the processor receives a plurality of inputs indicating whether the one or more image fragments are correct. When a number of inputs indicating an image fragment of the image fragments is correct exceeds a threshold, the processor aggregates the image fragment into a training data set. The processor trains the machine learning model with the training data set. |
format | Patent |
creationdate | 2023-03-30 |
fulltext | fulltext_linktorsrc |
language | eng |
recordid | cdi_epo_espacenet_US2023100179A1 |
source | esp@cenet |
subjects | CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; PHYSICS |
title | AUTOMATED, COLLABORATIVE PROCESS FOR AI MODEL PRODUCTION |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T20%3A33%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Fartaria,%20Mario&rft.date=2023-03-30&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2023100179A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |