Orchestrating data placement and query execution in heterogeneous CPU-GPU DBMS

There has been a growing interest in using GPU to accelerate data analytics due to its massive parallelism and high memory bandwidth. The main constraint of using GPU for data analytics is the limited capacity of GPU memory. Heterogeneous CPU-GPU query execution is a compelling approach to mitigate the limited GPU memory capacity and PCIe bandwidth. However, the design space of heterogeneous CPU-GPU query execution has not been fully explored. We aim to improve state-of-the-art CPU-GPU data analytics engines by optimizing data placement and heterogeneous query execution. First, we introduce a semantic-aware fine-grained caching policy which takes into account various aspects of the workload such as query semantics, data correlation, and query frequency when determining data placement between CPU and GPU. Second, we introduce a heterogeneous query executor which can fully exploit data in both CPU and GPU and coordinate query execution at a fine granularity. We integrate both solutions in Mordred, our novel hybrid CPU-GPU data analytics engine. Evaluation on the Star Schema Benchmark shows that the semantic-aware caching policy can outperform the best traditional caching policy by up to 3x. Compared to existing GPU DBMSs, Mordred can outperform by an order of magnitude.
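The caching policy the abstract describes ranks column data by workload signals (query frequency, semantic importance) before filling the limited GPU memory. A minimal sketch of that idea, not the paper's actual algorithm: the segment fields, the weighting formula, and the greedy budget fill below are all illustrative assumptions.

```python
# Hypothetical sketch of semantic-aware data placement: score each column
# segment by access frequency times a semantic weight (e.g. filter/join
# columns weighted above payload columns), normalized by size, then
# greedily cache the highest-scoring segments within the GPU memory budget.

def plan_gpu_placement(segments, gpu_budget_bytes):
    """segments: list of dicts with 'name', 'size', 'freq', 'weight'.
    Returns the set of segment names to place in GPU memory."""
    ranked = sorted(
        segments,
        key=lambda s: s["freq"] * s["weight"] / s["size"],
        reverse=True,
    )
    placed, used = set(), 0
    for seg in ranked:
        if used + seg["size"] <= gpu_budget_bytes:  # fits in remaining budget
            placed.add(seg["name"])
            used += seg["size"]
    return placed


segments = [
    {"name": "lo_orderdate", "size": 4, "freq": 10, "weight": 2.0},
    {"name": "lo_revenue", "size": 8, "freq": 10, "weight": 1.0},
    {"name": "lo_comment", "size": 16, "freq": 1, "weight": 0.5},
]
print(plan_gpu_placement(segments, gpu_budget_bytes=12))
```

The paper's policy is richer (it also models data correlation and caches at fine granularity); this sketch only illustrates the frequency-and-semantics-weighted ranking step.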


Bibliographic Details
Published in: Proceedings of the VLDB Endowment, 2022-07, Vol. 15 (11), p. 2491-2503
Authors: Yogatama, Bobbi W.; Gong, Weiwei; Yu, Xiangyao
Format: Article
Language: English
Online access: Full text
DOI: 10.14778/3551793.3551809
ISSN: 2150-8097
eISSN: 2150-8097
Source: ACM Digital Library Complete