Scalable programming with Scala and Spark
What's Spark? If you are an analyst or a data scientist, you're used to juggling multiple systems for working with data: SQL, Python, R, Java and so on. With Spark, you have a single engine where you can explore and play with large amounts of data, run machine learning algorithms, and then use the same system to productionize your code.
Scala: Scala is a general-purpose programming language, like Java or C++. Its functional programming nature and the availability of a REPL environment make it particularly suited to a distributed computing framework like Spark.
Analytics: Using Spark and Scala you can analyze and explore your data in an interactive environment with fast feedback. The course will show how to leverage the power of RDDs and DataFrames to manipulate data with ease.
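As a flavor of that interactive style, here is a minimal sketch of DataFrame analysis, assuming a Spark 2.x `spark-sql` dependency on the classpath; the dataset and column names are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

object DataFrameSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("analytics-sketch")
      .master("local[*]")   // local mode; no cluster required
      .getOrCreate()
    import spark.implicits._

    // A toy dataset standing in for real data
    val tweets = Seq(
      ("alice", 120), ("bob", 45), ("alice", 80)
    ).toDF("user", "retweets")

    // Explore with fast feedback: filter, group, aggregate
    tweets.filter($"retweets" > 50)
          .groupBy("user")
          .sum("retweets")
          .show()

    spark.stop()
  }
}
```

The same expressions can be typed line by line into the `spark-shell` REPL, which is where the fast-feedback workflow really shines.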
Machine Learning and Data Science: Spark's core functionality and built-in libraries make it easy to implement complex algorithms like recommendations with very few lines of code. We'll cover a variety of datasets and algorithms, including PageRank, MapReduce and graph datasets.
Scala Programming Constructs: Classes, Traits, First Class Functions, Closures, Currying, Case Classes
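A quick, plain-Scala tour of several of these constructs (no Spark needed; all names are illustrative):

```scala
trait Greeter { def greet(name: String): String }  // trait: an interface that can carry behavior

case class Point(x: Int, y: Int)                   // case class: equality and toString for free

object ConstructsSketch {
  // First-class function: a function stored in a value
  val square: Int => Int = n => n * n

  // Closure: `scale` captures `factor` from its surrounding scope
  val factor = 3
  val scale: Int => Int = n => n * factor

  // Currying: a function applied one parameter list at a time
  def add(a: Int)(b: Int): Int = a + b
  val addTen: Int => Int = add(10)

  def main(args: Array[String]): Unit = {
    println(square(4))   // 16
    println(scale(4))    // 12
    println(addTen(5))   // 15
    println(Point(1, 2)) // Point(1,2)
  }
}
```

These ideas carry directly into Spark: the functions you pass to `map` and `filter` are first-class functions, and they close over variables in your driver program.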
Lots of cool stuff:
- Music Recommendations using Alternating Least Squares and the Audioscrobbler dataset
- Dataframes and Spark SQL to work with Twitter data
- Using the PageRank algorithm with Google web graph dataset
- Using Spark Streaming for stream processing
- Working with graph data using the Marvel Social network dataset
... and of course all the basic and advanced Spark features:
- Resilient Distributed Datasets, Transformations (map, filter, flatMap), Actions (reduce, aggregate)
- Pair RDDs, reduceByKey, combineByKey
- Broadcast and Accumulator variables
- Spark for MapReduce
- The Java API for Spark
- Spark SQL, Spark Streaming, MLlib and GraphX
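To make the list above concrete, here is a minimal sketch tying several of these features together, assuming a `spark-core` dependency on the classpath; the data and names are illustrative, not from the course:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("rdd-sketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // RDD + transformations (lazy) followed by an action (triggers execution)
    val nums = sc.parallelize(1 to 10)
    val sumOfEvens = nums.filter(_ % 2 == 0).map(_ * 10).reduce(_ + _)  // 300

    // Pair RDD: word count with reduceByKey
    val counts = sc.parallelize(Seq("a", "b", "a"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .collect()                     // (a,2), (b,1) in some order

    // Broadcast variable: read-only data shipped once to each executor
    val lookup = sc.broadcast(Map("a" -> "alpha", "b" -> "beta"))
    val expanded = sc.parallelize(Seq("a", "b")).map(lookup.value(_)).collect()

    // Accumulator: a counter the tasks write to, read back on the driver
    val badRecords = sc.longAccumulator("badRecords")
    sc.parallelize(Seq(1, -1, 2)).foreach(n => if (n < 0) badRecords.add(1))

    println(s"sum=$sumOfEvens expanded=${expanded.toSeq} bad=${badRecords.value}")
    sc.stop()
  }
}
```

Note the split of roles: transformations like `map` and `filter` build a lazy lineage, and nothing runs until an action such as `reduce` or `collect` is called.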
- Engineers who want to use a distributed computing engine for batch or stream processing or both
- Analysts who want to leverage Spark for analyzing interesting datasets
- Data Scientists who want a single engine for analyzing and modelling data as well as productionizing it.
- All examples work with or without Hadoop. If you would like to use Spark with Hadoop, you'll need to have Hadoop installed (either in pseudo-distributed or cluster mode).
- The course assumes experience with one of the popular object-oriented programming languages like Java/C++
Self-Paced Learning Outline
- You, This Course and Us
- Introduction to Spark
- Resilient Distributed Datasets
- Advanced RDDs: Pair Resilient Distributed Datasets
- Advanced Spark: Accumulators, Spark Submit, MapReduce, Behind The Scenes
- PageRank: Ranking Search Results
- Spark SQL
- MLlib in Spark: Build a recommendations engine
- Spark Streaming
- Graph Libraries
- Scala Language Primer
- Supplementary Installs
| Learning Style | Self-Paced Learning |
| Course Duration | 9 Hours |