PySpark for Beginners [Video]

By: Tomasz Drabas

Overview of this course

Apache Spark is an open-source framework for efficient cluster computing with a strong interface for data parallelism and fault tolerance. This course will show you how to leverage the power of Python and put it to use in the Spark ecosystem. You will start by getting a firm understanding of the Spark 2.0 architecture and how to set up a Python environment for Spark. You will get familiar with the modules available in PySpark, learn how to abstract data with RDDs and DataFrames, and understand the streaming capabilities of PySpark. You will also get a thorough overview of the machine learning capabilities of PySpark using ML and MLlib, graph processing using GraphFrames, and polyglot persistence using Blaze. Finally, you will learn how to deploy your applications to the cloud using the spark-submit command. By the end of this course, you will have established a firm understanding of the Spark Python API and how it can be used to build data-intensive applications. All the code and supporting files for this course are available on GitHub at https://github.com/PacktPublishing/PySpark-for-Beginners
Table of Contents (6 chapters)
Chapter 6
Introducing the ML Package
Section 2
Parameter Hyper-Tuning
Parameter hyper-tuning is the process of finding the best parameters for a model: for example, the maximum number of iterations needed to properly estimate a logistic regression model, or the maximum depth of a decision tree. In this video, we will explore two techniques that allow us to find the best parameters for our models: grid search and train-validation splitting (see the sketch after this list).
- Understand grid search with an example
- Study train-validation splitting with an example
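
The following is a minimal, illustrative sketch of both techniques using PySpark's pyspark.ml.tuning module. The synthetic data, parameter values, and application name are assumptions made for the example, not taken from the course:

    # Sketch: grid search over logistic regression parameters, scored with
    # a train-validation split. Toy data and parameter values are illustrative.
    from pyspark.sql import SparkSession
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator
    from pyspark.ml.linalg import Vectors
    from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

    spark = SparkSession.builder.appName("hyper-tuning-sketch").getOrCreate()

    # Small synthetic dataset: two numeric features and a binary label.
    rows = [(Vectors.dense([float(i), float(i % 3)]), float(i % 2))
            for i in range(40)]
    df = spark.createDataFrame(rows, ["features", "label"])

    lr = LogisticRegression()

    # Grid search: enumerate every combination of the listed parameter values.
    grid = (ParamGridBuilder()
            .addGrid(lr.regParam, [0.01, 0.1, 1.0])
            .addGrid(lr.maxIter, [10, 50])
            .build())  # 3 x 2 = 6 candidate models

    # Train-validation splitting: fit each candidate on 75% of the data and
    # score it on the held-out 25% (area under the ROC curve by default).
    tvs = TrainValidationSplit(estimator=lr,
                               estimatorParamMaps=grid,
                               evaluator=BinaryClassificationEvaluator(),
                               trainRatio=0.75,
                               seed=42)
    model = tvs.fit(df)

    # Inspect the validation metric for each parameter combination; the fitted
    # model's transform() already uses the best-scoring candidate.
    for params, metric in zip(grid, model.validationMetrics):
        print({p.name: v for p, v in params.items()}, "->", metric)

    spark.stop()

CrossValidator, from the same module, accepts the same estimator, parameter grid, and evaluator, and replaces the single train-validation split with k-fold cross-validation, at the cost of fitting each candidate k times.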