Auto Tuning of Hadoop and Spark parameters

Data on the order of terabytes, petabytes, or beyond is known as Big Data. Such data cannot be processed with traditional database software, which creates the need for Big Data platforms. By combining the capabilities and features of various big data applications and utilities, Big Data...

Detailed Description

Bibliographic Details
Published in: arXiv.org 2021-11
Main Authors: Patanshetti, Tanuja; Pawar, Ashish Anil; Patel, Disha; Thakare, Sanket
Format: Article
Language: English
Subjects:
Online Access: Full text
container_title arXiv.org
creator Patanshetti, Tanuja
Pawar, Ashish Anil
Patel, Disha
Thakare, Sanket
description Data on the order of terabytes, petabytes, or beyond is known as Big Data. Such data cannot be processed with traditional database software, which creates the need for Big Data platforms. A Big Data platform combines the capabilities and features of various big data applications and utilities into a single solution and helps to develop, deploy, and manage the big data environment. Hadoop and Spark are two open-source Big Data platforms provided by Apache. Both platforms expose many configuration parameters, which can have unforeseen effects on execution time, accuracy, and other metrics. Manual tuning of these parameters is tiresome, so automatic methods are needed to tune them. After studying and analyzing previous work on automating the tuning of these parameters, this paper proposes two algorithms: Grid Search with Finer Tuning and Controlled Random Search. The performance indicator studied in this paper is execution time. Experimental results show a reduction in execution time of about 70% and 50% for Hadoop and 81.19% and 77.77% for Spark by Grid Search with Finer Tuning and Controlled Random Search, respectively. (A minimal illustrative sketch of such a parameter grid search follows this description.)
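To make the parameter-search idea concrete, the sketch below runs an exhaustive grid search over a few Spark configuration parameters with wall-clock execution time as the objective, in the spirit of the Grid Search approach named in the abstract. This is an assumption-laden illustration, not the paper's implementation: the value ranges in PARAM_GRID and the run_job placeholder (which would normally submit the real benchmark job, e.g. via spark-submit --conf key=value) are hypothetical.

# Minimal sketch (not the paper's implementation): exhaustive grid search over
# Spark configuration parameters, scored only by wall-clock execution time.
import itertools
import time

# Hypothetical search space: real Spark property names, but the value ranges
# are assumptions chosen purely for illustration.
PARAM_GRID = {
    "spark.executor.memory": ["2g", "4g", "8g"],
    "spark.executor.cores": [2, 4],
    "spark.sql.shuffle.partitions": [100, 200, 400],
}

def run_job(config):
    """Placeholder: submit the benchmark job with the given configuration
    (e.g. via spark-submit --conf key=value ...) and block until it finishes.
    Here it only simulates work so the sketch runs standalone."""
    time.sleep(0.01)  # stand-in for the actual job run; config is unused here

def grid_search(param_grid):
    """Try every combination in the grid and return the fastest one found."""
    names = list(param_grid)
    best_config, best_time = None, float("inf")
    for values in itertools.product(*(param_grid[n] for n in names)):
        config = dict(zip(names, values))
        start = time.perf_counter()
        run_job(config)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_config, best_time = config, elapsed
    return best_config, best_time

if __name__ == "__main__":
    config, seconds = grid_search(PARAM_GRID)
    print(f"Fastest configuration: {config} ({seconds:.2f}s)")

A Controlled Random Search variant would sample configurations from this space rather than enumerating every combination, which matters once the grid grows beyond a handful of parameters.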
doi_str_mv 10.48550/arxiv.2111.02604
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2021-11
issn 2331-8422
language eng
recordid cdi_arxiv_primary_2111_02604
source arXiv.org; Free E-Journals
subjects Algorithms
Big Data
Computer Science - Distributed, Parallel, and Cluster Computing
Parameters
Platforms
Searching
Source code
Tuning
title Auto Tuning of Hadoop and Spark parameters
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T20%3A54%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Auto%20Tuning%20of%20Hadoop%20and%20Spark%20parameters&rft.jtitle=arXiv.org&rft.au=Patanshetti,%20Tanuja&rft.date=2021-11-04&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2111.02604&rft_dat=%3Cproquest_arxiv%3E2593746693%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2593746693&rft_id=info:pmid/&rfr_iscdi=true