InferSpark: Statistical Inference at Scale
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | The Apache Spark stack has enabled fast large-scale data processing.
Although it offers a rich library of statistical models and inference
algorithms, it does not give domain users the ability to develop their own
models. The emergence of probabilistic programming languages has shown the
promise of developing sophisticated probabilistic models in a succinct and
programmatic way. These frameworks have the potential to automatically
generate inference algorithms for user-defined models and to answer various
statistical queries about those models. It is the perfect time to unite these
two directions into a programmable big data analysis framework. We thus
propose InferSpark, a probabilistic programming framework on top of Apache
Spark. Efficient statistical inference can be easily implemented on this
framework, and the inference process can leverage the distributed main-memory
processing power of Spark. This framework makes statistical inference on big
data possible and speeds up the penetration of probabilistic programming into
the data engineering domain. |
---|---|
DOI: | 10.48550/arxiv.1707.02047 |
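
The summary above contrasts Spark's fixed library of statistical models with the user-defined models a probabilistic programming layer such as InferSpark would enable. For orientation, the sketch below (not taken from the paper) invokes one of Spark MLlib's built-in inference algorithms, latent Dirichlet allocation, on a Spark cluster; the object name, input path, and parameter values are illustrative assumptions.

```scala
import org.apache.spark.ml.clustering.LDA
import org.apache.spark.sql.SparkSession

// A minimal sketch (not InferSpark itself): fitting one of Spark MLlib's
// built-in statistical models, latent Dirichlet allocation, at scale.
// The input path and the number of topics are placeholder assumptions.
object BuiltInLdaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("MLlib built-in LDA sketch")
      .getOrCreate()

    // Term-count vectors in libsvm format; the path is hypothetical.
    val corpus = spark.read.format("libsvm").load("data/corpus_libsvm.txt")

    // MLlib ships a fixed catalogue of such models; a probabilistic
    // programming layer like InferSpark aims to let users define their own.
    val lda = new LDA().setK(10).setMaxIter(20)
    val model = lda.fit(corpus)

    // Inspect the top terms of each inferred topic.
    model.describeTopics(5).show(truncate = false)

    spark.stop()
  }
}
```

A framework like InferSpark would instead let users declare their own probabilistic model and have the inference code generated and distributed automatically; the actual InferSpark API is not shown in this record.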