Introduction to Apache Spark with Scala

Apache Spark is a mature engine for large-scale data processing that runs computations in parallel across thousands of machines, making full use of the processing capacity of each one. Spark handles a wide range of workloads, including complex data analytics, streaming analytics, graph analytics, and scalable machine learning, on datasets measured in terabytes and beyond.

What is Apache Spark?

Apache Spark is an open-source cluster computing framework initially developed in the AMPLab at UC Berkeley. Compared to Hadoop's disk-based, two-stage MapReduce model, Spark's in-memory primitives can deliver up to 100 times better performance for certain applications. This makes it well suited to machine learning algorithms, which load a dataset into cluster memory and query it repeatedly. A Spark project comprises several components, such as Spark Core and Resilient Distributed Datasets (RDDs).
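
To make this concrete, here is a minimal sketch in Scala of the pattern described above: building a distributed dataset with Spark Core, caching it in cluster memory, and querying it more than once. It is illustrative rather than taken from the original post; the SparkIntro object name is made up, and the local[*] master is only for running on a single machine.

```scala
import org.apache.spark.sql.SparkSession

object SparkIntro {
  def main(args: Array[String]): Unit = {
    // Start a Spark session; on a real cluster the master URL
    // comes from the deployment rather than local[*].
    val spark = SparkSession.builder()
      .appName("SparkIntro")
      .master("local[*]")
      .getOrCreate()

    // Build an RDD (Resilient Distributed Dataset) from an
    // in-memory collection using Spark Core.
    val numbers = spark.sparkContext.parallelize(1 to 1000000)

    // cache() keeps the RDD in cluster memory, so the repeated
    // queries below (typical of iterative machine learning jobs)
    // avoid recomputing it from scratch.
    numbers.cache()

    val evenCount = numbers.filter(_ % 2 == 0).count()
    // Widen to Long before summing to avoid Int overflow.
    val total = numbers.map(_.toLong).reduce(_ + _)

    println(s"even count = $evenCount, sum = $total")
    spark.stop()
  }
}
```

The call to cache() is where the in-memory advantage over disk-based MapReduce shows up: the second action reuses the materialized partitions instead of re-reading and re-transforming the source data.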