Mastering Predictive Analytics with Python

By: Joseph Babcock

Overview of this book

The volume, diversity, and speed of available data have never been greater. Powerful machine learning methods can unlock the value in this information by finding complex relationships and unanticipated trends. Using the Python programming language, analysts can apply these sophisticated methods to build scalable analytic applications that deliver insights of tremendous value to their organizations. In Mastering Predictive Analytics with Python, you will learn the process of turning raw data into powerful insights. Through case studies and code examples using popular open-source Python libraries, this book illustrates the complete development process for analytic applications and shows how to quickly apply these methods to your own data to create robust and scalable prediction services. Covering a wide range of algorithms for classification, regression, and clustering, as well as cutting-edge techniques such as deep learning, this book illustrates not only how these methods work, but how to implement them in practice. You will learn to choose the right approach for your problem and how to develop engaging visualizations to bring the insights of predictive modeling to life.

Streaming clustering in Spark


Up to this point, we have mainly demonstrated examples for ad hoc exploratory analysis. In building up analytical applications, we need to begin putting these into a more robust framework. As an example, we will demonstrate the use of a streaming clustering pipeline using PySpark. This application will potentially scale to very large datasets, and we will compose the pieces of the analysis in such a way that it is robust to failure in the case of malformed data.
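To make this concrete, here is a minimal sketch of such a pipeline, not the exact application developed later in this chapter. It assumes a local Spark installation and a socket source on localhost port 9999 emitting comma-separated feature vectors; the parse_point helper and all parameter values (number of clusters, dimensionality, batch interval) are illustrative. Note how malformed records are dropped rather than being allowed to crash the job:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.mllib.clustering import StreamingKMeans
from pyspark.mllib.linalg import Vectors

# Construct the Spark and streaming contexts (10-second micro-batches).
sc = SparkContext(master="local[2]", appName="streaming_clustering")
ssc = StreamingContext(sc, 10)

def parse_point(line):
    # Guard against malformed records so one bad row does not fail the job.
    try:
        return [Vectors.dense([float(x) for x in line.strip().split(",")])]
    except ValueError:
        return []  # drop unparseable lines

points = ssc.socketTextStream("localhost", 9999).flatMap(parse_point)

# Streaming k-means with 5 clusters over 2-dimensional points.
model = StreamingKMeans(k=5, decayFactor=1.0).setRandomCenters(dim=2, weight=0.0, seed=42)
model.trainOn(points)
model.predictOn(points).pprint()

ssc.start()
ssc.awaitTermination()

Wrapping the parsing step in a try/except block is one simple way to achieve robustness to malformed input: bad records are filtered out by flatMap rather than raising an exception inside a Spark task.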

As we will be using similar examples with PySpark in the following chapters, let's review the key ingredients we need in such an application, some of which we already saw in Chapter 2, Exploratory Data Analysis and Visualization in Python. Most PySpark jobs we will create in this book consist of the following steps:

  1. Construct a Spark context. The context contains information such as the name of the application and parameters such as memory and the number of tasks (a minimal example of this step appears after this list).

  2. The Spark context may be used to construct secondary...
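As a minimal sketch of the first step (the exact configuration used later in the book may differ), a Spark context can be constructed along these lines; the application name and resource values shown here are illustrative:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("clustering_pipeline")    # name of the application
        .setMaster("local[4]")                # number of parallel worker threads
        .set("spark.executor.memory", "2g"))  # memory available to each executor
sc = SparkContext(conf=conf)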