Mastering Big Data Analytics with PySpark [Video]

By: Danny Meijer

Overview of this course

PySpark helps you perform data analysis at scale; it enables you to build more scalable analyses and pipelines. This course starts by introducing you to PySpark's potential for performing effective analyses of large datasets. You'll learn how to interact with Spark from Python and connect Jupyter to Spark to provide rich data visualizations. After that, you'll delve into Spark's various components and its architecture. You'll learn to work with Apache Spark and perform ML tasks more smoothly than before. You'll gather and query data using Spark SQL, overcoming the challenges involved in reading it. You'll use the DataFrame API to operate with Spark MLlib and learn about the Pipeline API. Finally, we provide tips and tricks for deploying your code and for performance tuning. By the end of this course, you will not only be able to perform efficient data analytics but will also have learned to use PySpark to easily analyze large datasets at scale in your organization. All related code files are placed in a GitHub repository at: https://github.com/PacktPublishing/Mastering-Big-Data-Analytics-with-PySpark
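As a flavor of the APIs the course covers, here is a minimal sketch (the data, app name, and view name are illustrative assumptions, not course material) of querying a DataFrame through Spark SQL from Python:

```python
from pyspark.sql import SparkSession

# Create (or reuse) a local Spark session -- the entry point to the
# DataFrame and Spark SQL APIs.
spark = SparkSession.builder.appName("pyspark-overview").getOrCreate()

# A tiny in-memory DataFrame; the rows and column names are illustrative.
df = spark.createDataFrame(
    [("Alice", 34), ("Bob", 45), ("Carol", 29)],
    ["name", "age"],
)

# Registering a temporary view lets the same data be queried with SQL.
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 30").show()
```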
Table of Contents (9 chapters)
Chapter 1
Python and Spark: A Match Made in Heaven
Chapter 5
Classification and Regression
Section 4
Parameters, Features, and Persistence
Apache Spark's MLlib has a built-in way of handling parameters, allowing you to set, tune, read, and manage them centrally. In this video, we will explore this concept, see how Spark provides unified APIs across its rich set of algorithms (the features module), and learn about pipeline persistence.
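These topics map onto standard PySpark APIs. The following is a minimal sketch (the column names, parameter values, and save path are illustrative assumptions) of the Param API, a features-module transformer, and pipeline persistence:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("mllib-params").getOrCreate()

# Every MLlib algorithm exposes its parameters through a shared Param API.
lr = LogisticRegression()
print(lr.explainParams())  # lists all params with docs and current values

# Params can be set centrally, at construction time or via setters.
lr = LogisticRegression(maxIter=20, regParam=0.01)
lr.setElasticNetParam(0.5)

# The features module follows the same unified API; VectorAssembler turns
# raw columns into the single feature vector MLlib estimators expect.
# (Input column names here are hypothetical.)
assembler = VectorAssembler(inputCols=["age", "income"], outputCol="features")

# Because all stages share the API, they compose into a Pipeline...
pipeline = Pipeline(stages=[assembler, lr])

# ...and a fitted pipeline persists to disk and loads back unchanged.
# Uncomment once a training DataFrame is available; the path is illustrative.
# model = pipeline.fit(training_df)
# model.write().overwrite().save("/tmp/lr-pipeline-model")
# same_model = PipelineModel.load("/tmp/lr-pipeline-model")
```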