Apache Spark

posted in Big Data by jayraj on 27 Jun 2016

Apache Spark - Lightning-fast cluster computing

Apache Spark is an open-source cluster computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.

Apache Spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (RDD): a read-only multiset of data items distributed over a cluster of machines and maintained in a fault-tolerant way. RDDs function as a working set for distributed programs, offering a (deliberately) restricted form of distributed shared memory. This facilitates both iterative algorithms, which visit their dataset multiple times in a loop, and interactive/exploratory data analysis, i.e., the repeated database-style querying of data.
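
As a minimal sketch of why this matters, the Scala snippet below caches an RDD and revisits it in a loop. It assumes the interactive spark-shell, where `sc` is the provided SparkContext, and a hypothetical file "data.txt" of comma-separated numbers.

    // Parse the file once and keep the resulting RDD in memory.
    val points = sc.textFile("data.txt")
      .map(_.split(",").map(_.toDouble))
      .cache()

    // Each pass of the loop revisits the same cached dataset, so nothing
    // is re-read from disk after the first iteration.
    var total = 0.0
    for (i <- 1 to 10) {
      total += points.map(_.sum).reduce(_ + _)
    }
    println(total)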

Apache Spark requires a cluster manager and a distributed storage system.

For cluster management, Spark supports its standalone mode (a native Spark cluster), Hadoop YARN, or Apache Mesos.

For distributed storage, Spark can interface with a wide variety of systems, including the Hadoop Distributed File System (HDFS), MapR File System (MapR-FS), Cassandra, OpenStack Swift, Amazon S3, and Kudu; a custom solution can also be implemented.
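
The sketch below shows how both choices surface in code: the cluster manager via the master URL, and the storage system via the URI scheme of the input path. The master URLs, host names, and paths here are placeholders for illustration only.

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("StorageExample")
      .setMaster("yarn-client")   // or "spark://host:7077" (standalone), "mesos://host:5050"
    val sc = new SparkContext(conf)

    // The same textFile API reaches different storage backends via the URI scheme:
    val fromHdfs = sc.textFile("hdfs://namenode:8020/logs/part-00000")
    val fromS3   = sc.textFile("s3n://my-bucket/input.txt")
    println(fromHdfs.count() + fromS3.count())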

Features -

Speed - Apache Spark has an advanced DAG execution engine that supports cyclic data flow and in-memory computing.

Ease of Use - Spark offers over 80 high-level operators that make it easy to build parallel apps, and you can use it interactively from the Scala, Python, and R shells (see the word-count sketch after this list).

Generality - Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.

Runs Everywhere - You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, or on Apache Mesos. Access data in HDFS, Cassandra, HBase, Hive, Tachyon, and any Hadoop data source.
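
The word-count sketch referenced above gives a feel for those high-level operators; it can be typed straight into spark-shell, with "README.md" standing in for any text file.

    // Split lines into words, pair each word with 1, and sum the counts.
    val counts = sc.textFile("README.md")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(5).foreach(println)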

Apache Spark comprises the following four modules -

Spark Core provides distributed task dispatching, scheduling, and basic I/O functionalities, exposed through an application programming interface (for Java, Python, Scala, and R) centered on the RDD abstraction.
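
In practice the Core API boils down to building RDDs, chaining lazy transformations, and triggering work with an action; a brief shell sketch:

    val nums    = sc.parallelize(1 to 100)    // distribute a local collection
    val squares = nums.map(n => n * n)        // transformation (lazy, not yet run)
    val evens   = squares.filter(_ % 2 == 0)  // another lazy transformation
    println(evens.count())                    // action: schedules and runs the job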

Spark SQL is a component on top of Spark Core that introduces a data abstraction called DataFrames, which provides support for structured and semi-structured data. Spark SQL provides a domain-specific language for manipulating DataFrames in Scala, Java, or Python. It also provides SQL language support, with command-line interfaces and an ODBC/JDBC server.
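
The sketch below shows the DataFrame DSL and the SQL interface side by side, using Spark 1.x-era APIs; "people.json" is a hypothetical file of {"name": ..., "age": ...} records.

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)
    val people = sqlContext.read.json("people.json")

    // DataFrame domain-specific language:
    people.filter(people("age") > 21).select("name").show()

    // Equivalent SQL over a registered temporary table:
    people.registerTempTable("people")
    sqlContext.sql("SELECT name FROM people WHERE age > 21").show()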

Spark MLlib is a fast, scalable distributed machine learning framework.
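
As one illustration, the sketch below clusters a few made-up 2-D points with MLlib's k-means; the data is purely for demonstration.

    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    val data = sc.parallelize(Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
      Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)
    )).cache()

    val model = KMeans.train(data, 2, 20)   // k = 2 clusters, up to 20 iterations
    model.clusterCenters.foreach(println)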

GraphX is a distributed graph processing framework on top of Apache Spark. It provides an API for expressing graph computations that can model the Pregel abstraction, along with an optimized runtime for that abstraction.
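
A tiny GraphX sketch, assembling a graph from vertex and edge RDDs and running the built-in PageRank (itself written against the Pregel-style API); the vertices and edges are invented for illustration.

    import org.apache.spark.graphx.{Edge, Graph}

    val vertices = sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))
    val edges    = sc.parallelize(Seq(Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows")))

    val graph = Graph(vertices, edges)
    val ranks = graph.pageRank(0.001).vertices   // (vertexId, rank) pairs
    ranks.collect().foreach(println)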
