A kernel-based multivariate feature selection method for microarray data classification

High dimensionality and small sample sizes, together with their inherent risk of overfitting, pose great challenges for constructing efficient classifiers in microarray data classification. Therefore, a feature selection technique should be applied prior to data classification to enhance prediction performan...

Bibliographic Details
Published in: PloS one 2014-07, Vol.9 (7), p.e102541-e102541
Main authors: Sun, Shiquan; Peng, Qinke; Shakoor, Adnan
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page e102541
container_issue 7
container_start_page e102541
container_title PloS one
container_volume 9
creator Sun, Shiquan
Peng, Qinke
Shakoor, Adnan
description High dimensionality and small sample sizes, together with their inherent risk of overfitting, pose great challenges for constructing efficient classifiers in microarray data classification. Therefore, a feature selection technique should be applied prior to data classification to enhance prediction performance. In general, filter methods can serve as a principal or auxiliary selection mechanism because of their simplicity, scalability, and low computational complexity. However, a series of simple examples shows that filter methods yield less accurate performance because they ignore dependencies among features. Although a few publications have attempted to reveal relationships among features with multivariate methods, these methods describe such relationships only through linear combinations, and this simple linear form restricts the achievable improvement in performance. In this paper, we use a kernel method to discover the inherent nonlinear correlations among features, as well as between features and the target. Moreover, the number of orthogonal components is determined by kernel Fisher's linear discriminant analysis (FLDA) in a self-adaptive manner rather than by manual parameter settings. To demonstrate the effectiveness of our method, we performed several experiments and compared the results of our method with those of other competitive multivariate feature selectors. In the comparison, we used two classifiers (support vector machine and k-nearest neighbor) on two groups of datasets, namely two-class and multi-class datasets. Experimental results demonstrate that our method outperforms the others, especially on three hard-to-classify datasets, namely Wang's Breast Cancer, Gordon's Lung Adenocarcinoma, and Pomeroy's Medulloblastoma.
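The description above outlines a general recipe: score nonlinear feature-target dependence with a kernel criterion, keep the most relevant features, and evaluate with SVM and k-NN classifiers. The sketch below is only an illustration of that general idea, not the authors' algorithm: it ranks each feature by a simple (biased) HSIC estimate with an RBF kernel, keeps a fixed number of top-ranked features, and cross-validates SVM and k-NN on synthetic high-dimensional, small-sample data standing in for a microarray set. The HSIC criterion, the bandwidth heuristic, the synthetic data, and the choice of 50 retained features are all assumptions made for the example; the paper itself determines the number of orthogonal components self-adaptively via kernel FLDA rather than by a fixed top-k cutoff.

```python
# Illustrative sketch only -- NOT the algorithm from Sun et al. (2014).
# Rank features by a kernel dependence score (biased HSIC), then evaluate
# SVM and k-NN classifiers on the top-ranked features, mirroring the kind
# of comparison described in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def rbf_kernel_1d(x, gamma=None):
    """RBF Gram matrix for a single feature (1-D array)."""
    d = (x[:, None] - x[None, :]) ** 2
    if gamma is None:                          # median heuristic for the bandwidth
        med = np.median(d[d > 0]) if np.any(d > 0) else 1.0
        gamma = 1.0 / (med + 1e-12)
    return np.exp(-gamma * d)


def hsic_score(x, y):
    """Biased HSIC estimate between one feature x and class labels y."""
    n = len(x)
    K = rbf_kernel_1d(x)
    L = (y[:, None] == y[None, :]).astype(float)   # delta kernel on the labels
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2


# Small-sample, high-dimensional toy data standing in for a microarray set.
X, y = make_classification(n_samples=80, n_features=2000, n_informative=20,
                           n_redundant=0, random_state=0)

scores = np.array([hsic_score(X[:, j], y) for j in range(X.shape[1])])
top = np.argsort(scores)[::-1][:50]                # keep the 50 top-ranked features

for name, clf in [("SVM", SVC(kernel="linear")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    model = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(model, X[:, top], y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```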
doi_str_mv 10.1371/journal.pone.0102541
format Article
fulltext fulltext
identifier ISSN: 1932-6203
ispartof PloS one, 2014-07, Vol.9 (7), p.e102541-e102541
issn 1932-6203
1932-6203
language eng
recordid cdi_plos_journals_1547308631
source MEDLINE; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; Public Library of Science (PLoS) Journals Open Access; PubMed Central; Free Full-Text Journals in Chemistry
subjects Adenocarcinoma
Algorithms
Artificial Intelligence
Bioinformatics
Biology and Life Sciences
Breast cancer
Cancer
Classification
Classifiers
Computer and Information Sciences
Computer applications
Datasets
Discriminant analysis
Engineering
Gene expression
Humans
Least-Squares Analysis
Lung cancer
Lymphoma
Medical research
Medulloblastoma
Methods
Neoplasms - genetics
Oligonucleotide Array Sequence Analysis - methods
Order parameters
Principal components analysis
Selectors
Software
Variables
title A kernel-based multivariate feature selection method for microarray data classification
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-02T23%3A24%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_plos_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20kernel-based%20multivariate%20feature%20selection%20method%20for%20microarray%20data%20classification&rft.jtitle=PloS%20one&rft.au=Sun,%20Shiquan&rft.date=2014-07-21&rft.volume=9&rft.issue=7&rft.spage=e102541&rft.epage=e102541&rft.pages=e102541-e102541&rft.issn=1932-6203&rft.eissn=1932-6203&rft_id=info:doi/10.1371/journal.pone.0102541&rft_dat=%3Cproquest_plos_%3E3380056311%3C/proquest_plos_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1547308631&rft_id=info:pmid/25048512&rft_doaj_id=oai_doaj_org_article_09539a29da3c4093b7c114d2bd0f9d85&rfr_iscdi=true