Computer system and computerized method for partitioning data for parallel processing

A computer system splits a data space to partition data between processors or processes. Using a decision tree, the data space may be split into sub-regions whose boundaries need not be orthogonal to the axes defined by the data space's parameters. The decision tree can have neural networks in each of its non-terminal nodes that are trained on, and are used to partition, training data. Each terminal, or leaf, node can have a hidden-layer neural network trained on the training data that reaches that terminal node. The training of the non-terminal nodes' neural networks can be performed on one processor, and the training of the leaf nodes' neural networks can be run on separate processors. Different target values can be used for training the networks of different non-terminal nodes. The non-terminal node networks may be hidden-layer neural networks. Each non-terminal node may automatically send a desired ratio of the training records it receives to each of its child nodes, so that the leaf node networks each receive approximately the same number of training records. The system may automatically configure the tree to have a number of leaf nodes equal to the number of separate processors available to train leaf node networks. After the non-terminal and leaf node networks have been trained, the records of a large database can be passed through the tree for classification or for estimation of certain parameter values.
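
To make the partitioning scheme concrete, the sketch below (in Python with NumPy, chosen here since the record names no language) illustrates the general idea the abstract describes: an internal node scores records with a small neural layer and routes a target fraction of them to each child, so the leaves end up with roughly equal shares that could each be handed to a separate processor. The names (NodeSplitter, partition), the fixed split ratio, and the untrained random weights are illustrative assumptions, not the patented implementation, in which the node networks are trained and leaf training actually runs in parallel.

import numpy as np

class NodeSplitter:
    """Internal tree node: a tiny one-hidden-layer scorer that orders records,
    then splits them at the quantile sending `left_ratio` of them to the left child."""
    def __init__(self, n_features, n_hidden=4, left_ratio=0.5, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        # Random weights stand in for a trained non-terminal-node network.
        self.w1 = rng.normal(size=(n_features, n_hidden))
        self.w2 = rng.normal(size=(n_hidden,))
        self.left_ratio = left_ratio

    def score(self, X):
        # Hidden layer with tanh activation followed by a linear output.
        return np.tanh(X @ self.w1) @ self.w2

    def split(self, X):
        s = self.score(X)
        threshold = np.quantile(s, self.left_ratio)  # enforce the desired ratio
        left = s <= threshold
        return X[left], X[~left]

def partition(X, depth):
    """Recursively split X into 2**depth groups of roughly equal size; each group
    would then go to its own processor to train a leaf-node network."""
    if depth == 0:
        return [X]
    left, right = NodeSplitter(X.shape[1]).split(X)
    return partition(left, depth - 1) + partition(right, depth - 1)

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(1000, 8))  # synthetic records
    groups = partition(X, depth=2)                        # 4 leaves ~ 4 processors
    print([len(g) for g in groups])                       # roughly equal partitions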

Bibliographic details
Main authors: BECKERLE, MICHAEL J; PASSERA, ANTHONY; THORP, JOHN R; ZYSZKOWSKI, EDWARD S. A
Format: Patent
Language: English
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; HANDLING RECORD CARRIERS; PHYSICS; PRESENTATION OF DATA; RECOGNITION OF DATA; RECORD CARRIERS
Online access: Order full text
Record ID: cdi_epo_espacenet_US5909681A
Source: esp@cenet