Driving Big Data with Hadoop Tools and Technologies
Saved in:
Main authors: (not listed in record)
Format: Book chapter
Language: English
Subjects: (none listed)
Online access: Full text
Summary: The core components of Hadoop, namely the Hadoop Distributed File System (HDFS), MapReduce, and Yet Another Resource Negotiator (YARN), are explained. This chapter also examines the features of HDFS, such as its scalability, reliability, and robustness. Apache Hadoop is an open-source framework written in Java that supports the processing of large data sets in a streaming access pattern across clusters in a distributed computing environment. HBase is a column-oriented NoSQL database: a horizontally scalable, open-source distributed database built on top of HDFS. When structured data grows too large for an RDBMS to handle, it is transferred to HDFS through a tool called Sqoop (SQL to Hadoop). The basic difference between Flume and Sqoop is that Sqoop is used to ingest structured data into Hive, HDFS, and HBase, whereas Flume is used to ingest large amounts of streaming data into Hive, HDFS, and HBase.
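The RDBMS-to-HDFS transfer with Sqoop described in the summary can be sketched as a single import command. This is a minimal illustration only: the JDBC URL, database name, credentials, table name, and target directory below are placeholder assumptions, not values from the chapter.

```shell
# Minimal Sqoop (Sqoop 1 CLI) import: copy one RDBMS table into HDFS.
# Connection details are illustrative placeholders.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl_user \
  --password-file /user/etl/.dbpass \
  --table orders \
  --target-dir /user/hadoop/orders \
  --num-mappers 4
```

By contrast, the streaming ingestion that Flume handles would be defined not as a one-shot command but as an agent configuration file wiring a source, a channel, and an HDFS sink together.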
DOI: 10.1002/9781119701859.ch5