A multiobjective approach in constructing a predictive model for Fischer‐Tropsch synthesis
Fischer‐Tropsch synthesis (FTS) is an important chemical process that produces a wide range of hydrocarbons. The exact mechanism of FTS is not yet fully understood, so prediction of the FTS product distribution is not a trivial task. So far, artificial neural networks (ANNs) have been successfully a...
Saved in:
Published in: | Journal of chemometrics 2018-03, Vol.32 (3), p.n/a |
---|---|
Main authors: | Dehghanian, Effat ; Gheshlaghi, Saman Zare |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | n/a |
---|---|
container_issue | 3 |
container_start_page | |
container_title | Journal of chemometrics |
container_volume | 32 |
creator | Dehghanian, Effat ; Gheshlaghi, Saman Zare |
description | Fischer‐Tropsch synthesis (FTS) is an important chemical process that produces a wide range of hydrocarbons. The exact mechanism of FTS is not yet fully understood, so prediction of the FTS product distribution is not a trivial task. So far, artificial neural networks (ANNs) have been successfully applied to model a variety of chemical processes whenever sufficient and well‐distributed training patterns are available. However, for most chemical processes, such as FTS, acquiring such an amount of data is very time‐consuming and expensive. In such cases, a neural network ensemble (NNE) has shown significant generalization ability. An NNE is a set of diverse and accurate ANNs trained for the same task, and its output is a combination of the outputs of these ANNs. This paper proposes a new NNE approach, called NNE‐NSGA‐II, that prunes this set with a modified nondominated sorting genetic algorithm to obtain an optimal subset according to 2 conflicting objectives: minimizing the root‐mean‐square error on the training and unseen data sets. Finally, a comparative study is performed on a single best ANN, a regular NNE, NNE‐NSGA, and 3 popular ensembles of decision trees: random forest, stochastic gradient boosting, and AdaBoost.R2. The results show that on the training data set, stochastic gradient boosting and AdaBoost.R2 fit the samples better; however, for the predicted FTS products on the unseen data set, the NNE methods, especially NNE‐NSGA‐II, considerably improve generalization ability in comparison with the other competing approaches.
This paper proposes a novel multiobjective approach based on a modified nondominated sorting genetic algorithm for building a neural network ensemble as a predictive model of Fischer‐Tropsch synthesis. Although a comparative study shows that some popular ensembles of decision trees, such as stochastic gradient boosting and AdaBoost.R2, fit the training samples better than the proposed approach, it has considerably better generalization ability than the compared methods for predicting the samples of the unseen data set. |
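The selection step the abstract describes can be sketched as a Pareto (nondominated) filter over candidate ensemble subsets, scored on two conflicting objectives: RMSE on the training set and RMSE on the unseen set. The following is a minimal illustrative sketch, not the paper's implementation: the function names and toy data are assumptions, ensemble output is taken as a plain average of member predictions, and brute-force subset enumeration stands in for the modified NSGA‐II search the paper actually uses.

```python
import math
from itertools import combinations

def rmse(y_true, y_pred):
    # Root-mean-square error between targets and predictions.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def ensemble_predict(members, i):
    # NNE output: average of the member networks' outputs
    # (members are modeled here as lists of precomputed predictions).
    return sum(m[i] for m in members) / len(members)

def dominates(a, b):
    # a Pareto-dominates b (both minimized) if a is no worse in every
    # objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates, objectives):
    # Keep only candidates whose objective vector no other candidate dominates.
    return [c for c, obj in zip(candidates, objectives)
            if not any(dominates(other, obj) for other in objectives if other != obj)]

# Toy run: 3 members, predictions over 3 training points and 2 unseen points
# (all values hypothetical).
train_y, unseen_y = [1.0, 2.0, 3.0], [4.0, 5.0]
members = [
    [1.1, 2.0, 2.9, 4.4, 5.3],
    [0.8, 2.2, 3.1, 4.1, 4.9],
    [1.5, 1.6, 3.4, 3.9, 5.1],
]
subsets = [list(s) for r in (1, 2, 3) for s in combinations(members, r)]
objectives = []
for s in subsets:
    pred = [ensemble_predict(s, i) for i in range(5)]
    objectives.append((rmse(train_y, pred[:3]), rmse(unseen_y, pred[3:])))
best = pareto_front(subsets, objectives)  # nondominated subsets
```

In the paper's setting, a genetic algorithm evolves bit-strings that switch members in or out of the ensemble, and nondominated sorting like the filter above ranks those candidate subsets instead of enumerating them exhaustively.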
doi_str_mv | 10.1002/cem.2969 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0886-9383 |
ispartof | Journal of chemometrics, 2018-03, Vol.32 (3), p.n/a |
issn | 0886-9383 1099-128X |
language | eng |
recordid | cdi_proquest_journals_2012831338 |
source | Wiley Online Library Journals Frontfile Complete |
subjects | AdaBoost.R2 ; Artificial neural networks ; Chemical synthesis ; Classification ; Data acquisition ; Decision trees ; Fischer-Tropsch process ; Fischer‐Tropsch synthesis ; Genetic algorithms ; Hydrocarbons ; Machine learning ; Mathematical models ; multiobjective optimization ; Multiple objective analysis ; neural network ensemble ; Neural networks ; Pareto optimality ; Prediction models ; random forest ; Sorting algorithms ; stochastic gradient boosting ; Training |
title | A multiobjective approach in constructing a predictive model for Fischer‐Tropsch synthesis |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T08%3A39%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20multiobjective%20approach%20in%20constructing%20a%20predictive%20model%20for%20Fischer%E2%80%90Tropsch%20synthesis&rft.jtitle=Journal%20of%20chemometrics&rft.au=Dehghanian,%20Effat&rft.date=2018-03&rft.volume=32&rft.issue=3&rft.epage=n/a&rft.issn=0886-9383&rft.eissn=1099-128X&rft_id=info:doi/10.1002/cem.2969&rft_dat=%3Cproquest_cross%3E2012831338%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2012831338&rft_id=info:pmid/&rfr_iscdi=true |