A parallel, incremental and concurrent GC for servers

Multithreaded applications with multi-gigabyte heaps running on modern servers provide new challenges for garbage collection (GC). The challenges for 'server-oriented' GC include: ensuring short pause times on a multi-gigabyte heap, while minimizing throughput penalty, good scaling on multiprocessor hardware, and keeping the number of expensive multi-cycle fence instructions required by weak ordering to a minimum.

Detailed description

Saved in:
Bibliographic details
Main authors: Ossia, Yoav, Ben-Yitzhak, Ori, Goft, Irit, Kolodner, Elliot K, Leikehman, Victor, Owshanko, Avi
Format: Conference Proceeding
Language: eng
Online access: Full text
container_end_page 140
container_start_page 129
creator Ossia, Yoav
Ben-Yitzhak, Ori
Goft, Irit
Kolodner, Elliot K
Leikehman, Victor
Owshanko, Avi
description Multithreaded applications with multi-gigabyte heaps running on modern servers provide new challenges for garbage collection (GC). The challenges for 'server-oriented' GC include: ensuring short pause times on a multi-gigabyte heap, while minimizing throughput penalty, good scaling on multiprocessor hardware, and keeping the number of expensive multi-cycle fence instructions required by weak ordering to a minimum. We designed and implemented a fully parallel, incremental, mostly concurrent collector, which employs several novel techniques to meet these challenges. First, it combines incremental GC to ensure short pause times with concurrent low-priority background GC threads to take advantage of processor idle time. Second, it employs a low-overhead work packet mechanism to enable full parallelism among the incremental and concurrent collecting threads and ensure load balancing. Third, it reduces memory fence instructions by using batching techniques: one fence for each block of small objects allocated, one fence for each group of objects marked, and no fence at all in the write barrier. When compared to the mature well-optimized parallel stop-the-world mark-sweep collector already in the IBM JVM, our collector prototype reduces the maximum pause time from 284 ms to 101 ms, and the average pause time from 266 ms to 66 ms while only losing 10% throughput when running the SPECjbb2000 benchmark on a 256 MB heap on a 4-way 550 MHz Pentium multiprocessor.
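The work-packet mechanism summarized in the abstract can be sketched roughly as follows. This is a minimal illustration in Java, not the IBM JVM implementation; the class names, packet capacity, and two-pool layout are hypothetical. The point it shows is that marker threads trade whole packets of references rather than individual objects, so synchronization cost is paid once per packet and idle threads can always grab work from a shared pool:

```java
import java.util.ArrayDeque;
import java.util.concurrent.ConcurrentLinkedDeque;

// A packet holds a bounded batch of references to scan. Only its owner
// thread touches the contents, so the inner deque needs no locking.
final class WorkPacket {
    static final int CAPACITY = 512;           // hypothetical batch size
    final ArrayDeque<Object> refs = new ArrayDeque<>(CAPACITY);
    boolean isFull()  { return refs.size() >= CAPACITY; }
    boolean isEmpty() { return refs.isEmpty(); }
}

// Shared pools of packets. A collector thread checks out one non-empty
// packet to drain and one empty packet to fill with newly discovered
// references; contention is per packet exchange, not per object.
final class PacketPool {
    private final ConcurrentLinkedDeque<WorkPacket> nonEmpty = new ConcurrentLinkedDeque<>();
    private final ConcurrentLinkedDeque<WorkPacket> empty    = new ConcurrentLinkedDeque<>();

    PacketPool(int nPackets) {
        for (int i = 0; i < nPackets; i++) empty.add(new WorkPacket());
    }

    WorkPacket getNonEmpty() { return nonEmpty.poll(); }  // null => no marking work
    WorkPacket getEmpty()    { return empty.poll(); }
    void put(WorkPacket p)   { (p.isEmpty() ? empty : nonEmpty).add(p); }
}
```

Load balancing falls out of the design: a thread that runs dry simply polls the non-empty pool for a packet produced by any other thread, whether incremental or background-concurrent.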
doi_str_mv 10.1145/512529.512546
format Conference Proceeding
date 2002-06-17
fulltext fulltext
identifier ISSN: 0362-1340
ispartof SIGPLAN notices, 2002, p.129-140
issn 0362-1340
language eng
recordid cdi_proquest_miscellaneous_31466442
source ACM Digital Library
title A parallel, incremental and concurrent GC for servers