Numeric Encoding Options with Automunge
Mainstream practice in machine learning with tabular data may take for granted that any feature engineering beyond scaling for numeric sets is superfluous in the context of deep neural networks. This paper offers arguments for potential benefits of extended encodings of numeric streams in deep learning by way of a survey of options for numeric transformations as available in the Automunge open source Python library platform for tabular data pipelines, where transformations may be applied to distinct columns in "family tree" sets with generations and branches of derivations. Automunge transformation options include normalization, binning, noise injection, derivatives, and more. The aggregation of these methods into family tree sets of transformations is demonstrated for use to present numeric features to machine learning in multiple configurations of varying information content, as may be applied to encode numeric sets of unknown interpretation. Experiments demonstrate the realization of a novel generalized solution to data augmentation by noise injection for tabular learning, as may materially benefit model performance in applications with underserved training data.
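The transformation categories the abstract names (normalization, binning, noise injection) can be illustrated with a minimal NumPy sketch. Note this is a conceptual illustration of the transformation types only, not Automunge's own API, and the column values are hypothetical:

```python
import numpy as np

# Conceptual sketch only -- plain NumPy, not the Automunge library API.
rng = np.random.default_rng(0)
column = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Normalization: zero mean, unit variance (z-score scaling).
normalized = (column - column.mean()) / column.std()

# Binning: three equal-width ordinal bins over the column's range.
edges = np.linspace(column.min(), column.max(), 4)
bins = np.clip(np.digitize(column, edges) - 1, 0, 2)

# Noise injection: train-time augmentation by perturbing the
# normalized values with small Gaussian noise.
noisy = normalized + rng.normal(0.0, 0.03, size=normalized.shape)

print(bins)  # -> [0 0 1 2 2]
```

Each derivation yields a new column, so presenting several of them together gives the downstream model multiple views of the same numeric feature at varying information content.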
Author: | Teague, Nicholas J |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Learning |
Online access: | Full text via arXiv |
creator | Teague, Nicholas J |
description | Mainstream practice in machine learning with tabular data may take for granted that any feature engineering beyond scaling for numeric sets is superfluous in the context of deep neural networks. This paper offers arguments for potential benefits of extended encodings of numeric streams in deep learning by way of a survey of options for numeric transformations as available in the Automunge open source Python library platform for tabular data pipelines, where transformations may be applied to distinct columns in "family tree" sets with generations and branches of derivations. Automunge transformation options include normalization, binning, noise injection, derivatives, and more. The aggregation of these methods into family tree sets of transformations is demonstrated for use to present numeric features to machine learning in multiple configurations of varying information content, as may be applied to encode numeric sets of unknown interpretation. Experiments demonstrate the realization of a novel generalized solution to data augmentation by noise injection for tabular learning, as may materially benefit model performance in applications with underserved training data. |
format | Article |
identifier | DOI: 10.48550/arxiv.2202.09496 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Learning |
title | Numeric Encoding Options with Automunge |
url | https://arxiv.org/abs/2202.09496 |