Alpha-Net: Architecture, Models, and Applications
Saved in:
Main authors: | Shaikh, Jishan; Sharma, Adya; Chouhan, Ankit; Mahawar, Avinash |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Shaikh, Jishan; Sharma, Adya; Chouhan, Ankit; Mahawar, Avinash |
description | Training deep learning networks is usually computationally expensive and
conceptually complex. We present a novel network architecture for custom
training and weight evaluation. We reformulate the layers as ResNet-like
blocks, each with its own inputs and outputs; these blocks (called Alpha
blocks) form their own network through their connection configuration and,
combined with our novel loss function and normalization function, make up the
complete Alpha-Net architecture. We provide an empirical mathematical
formulation of the network loss function to support accuracy estimation and
further optimization. We implement Alpha-Net with four different layer
configurations to characterize the architecture's behavior comprehensively. On
a custom dataset derived from the ImageNet benchmark, Alpha-Net v1, v2, v3,
and v4 achieve image-recognition accuracies of 78.2%, 79.1%, 79.5%, and 78.3%,
respectively. Alpha-Net v3 improves accuracy by approximately 3% over the
previous state-of-the-art network, ResNet-50, on the ImageNet benchmark. We
also present an analysis of our dataset with 256, 512, and 1024 layers and
with different versions of the loss function. Input representation is also
crucial for training: the initial preprocessing keeps only a handful of
features, so training is no more complex than it needs to be. Finally, we
compare network behavior across different layer structures, loss functions,
and normalization functions for better quantitative modeling of Alpha-Net. |
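
The record describes the Alpha blocks only as ResNet-like blocks that have their own inputs and outputs and that are wired together into a larger network. As a rough, illustrative sketch rather than the paper's actual design, the following PyTorch snippet shows one way such a residual-style block could be written; the class name `AlphaBlock`, the convolution/BatchNorm/ReLU layout, and the two-block chain in the usage example are all assumptions, since this record gives no details of the paper's block internals, loss function, or normalization.

```python
import torch
import torch.nn as nn


class AlphaBlock(nn.Module):
    """Hypothetical ResNet-style block: two 3x3 convolutions plus a skip path.

    Illustrative only; the actual Alpha block design is not described in this
    record, nor are the paper's loss and normalization functions.
    """

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # Project the skip path when the spatial size or channel count changes,
        # as in standard ResNets, so the addition below is shape-compatible.
        self.shortcut = nn.Identity()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Residual connection: block output plus (possibly projected) input.
        return self.relu(out + self.shortcut(x))


if __name__ == "__main__":
    # Chain a few blocks to mimic the idea that the blocks, through their
    # connection configuration, form a network of their own.
    net = nn.Sequential(AlphaBlock(3, 64), AlphaBlock(64, 128, stride=2))
    x = torch.randn(1, 3, 224, 224)
    print(net(x).shape)  # torch.Size([1, 128, 112, 112])
```
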
doi_str_mv | 10.48550/arxiv.2007.07221 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2007.07221 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2007_07221 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
title | Alpha-Net: Architecture, Models, and Applications |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T16%3A38%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Alpha-Net:%20Architecture,%20Models,%20and%20Applications&rft.au=Shaikh,%20Jishan&rft.date=2020-06-27&rft_id=info:doi/10.48550/arxiv.2007.07221&rft_dat=%3Carxiv_GOX%3E2007_07221%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |