
Training Systems using Python Statistical Modeling

By: Curtis Miller

Overview of this book

Python's ease of use and multi-purpose nature have made it one of the most popular tools for data scientists and machine learning developers. Its rich libraries are widely used for data analysis and, more importantly, for building state-of-the-art predictive models. This book is designed to guide you through using these libraries to implement effective statistical models for predictive analytics. You'll start by delving into classical statistical analysis, where you will learn to compute descriptive statistics using pandas. You will then focus on supervised learning, which will help you explore the principles of machine learning and train different machine learning models from scratch. Next, you will work with binary prediction models, such as data classification using k-nearest neighbors, decision trees, and random forests. The book will also cover algorithms for regression analysis, such as ridge and lasso regression, and their implementation in Python. In later chapters, you will learn how neural networks can be trained and deployed for more accurate predictions, and understand which Python libraries can be used to implement them. By the end of this book, you will have the knowledge you need to design, build, and deploy enterprise-grade statistical models for machine learning using Python and its rich ecosystem of libraries for predictive analytics.
Table of Contents (9 chapters)

Naive Bayes classifier

Now, we will look at the Naive Bayes classifier. We will begin by looking at how the classifier works. Then, we will look at the assumptions that underlie not only the Naive Bayes algorithm, but also other important classifiers. Finally, we will train the classifier on the Titanic dataset.

The Naive Bayes classifier's big idea is to assume that the features used for prediction are independent of one another, but not of the class that we are trying to predict. To make a prediction, we estimate, for each possible class, the likelihood that a data point would have the observed values for its features given that it belongs to that class. We predict the class that maximizes this likelihood.
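This decision rule can be sketched directly from the independence assumption. The following is a minimal toy example with made-up binary features (not the book's Titanic data): for each class, it multiplies the class prior by the per-feature conditional probabilities, which the independence assumption lets us estimate separately, and picks the class with the highest (log) likelihood. The Laplace smoothing via `alpha` is an assumption of this sketch, not something the text has introduced yet.

```python
import numpy as np

# Hypothetical toy training data: 3 binary features, 2 classes.
X = np.array([[1, 0, 1],
              [1, 1, 1],
              [0, 0, 1],
              [0, 1, 0],
              [0, 0, 0]])
y = np.array([1, 1, 1, 0, 0])

def predict(x, X, y, alpha=1.0):
    """Predict the class maximizing P(class) * prod_j P(x_j | class),
    relying on the naive independence assumption between features."""
    best_class, best_logp = None, -np.inf
    for c in np.unique(y):
        Xc = X[y == c]
        prior = len(Xc) / len(X)
        # Laplace-smoothed estimates of P(x_j = 1 | class = c)
        p1 = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
        # Sum of logs instead of a product, for numerical stability
        logp = np.log(prior) + np.sum(np.where(x == 1, np.log(p1), np.log(1 - p1)))
        if logp > best_logp:
            best_class, best_logp = c, logp
    return best_class

print(predict(np.array([1, 0, 1]), X, y))  # → 1
```

Working in log space is a standard trick here: multiplying many small probabilities underflows quickly, while summing their logarithms does not.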

Training consists of estimating the quantities that are used in forming these likelihoods. The independence assumption makes this task relatively painless. The Naive...
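Because of the independence assumption, the quantities to estimate reduce to simple counts: one prior per class, and one conditional probability per feature per class. The following sketch uses a hypothetical binary feature standing in for a Titanic-style column (e.g. sex encoded 0/1, with survival as the class); it is not the book's actual dataset.

```python
import numpy as np

# Hypothetical stand-in data: one binary feature, binary class label.
X = np.array([[1], [1], [0], [0], [0], [1]])
y = np.array([1, 1, 1, 0, 0, 0])

# "Training" is just counting:
# class priors P(class = c) ...
priors = {c: np.mean(y == c) for c in np.unique(y)}
# ... and per-feature conditional probabilities P(feature = 1 | class = c)
conditionals = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

print(priors)
print(conditionals)
```

These two dictionaries are all that a (binary-feature) Naive Bayes model needs to store; prediction then combines them as in the previous paragraph.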