Python Machine Learning By Example - Second Edition

By: Yuxi (Hayden) Liu

Overview of this book

Interest in machine learning (ML) has surged because it revolutionizes automation: systems learn patterns in data and use them to make predictions and decisions. If you're interested in ML, this book will serve as your entry point. Python Machine Learning By Example begins with an introduction to important ML concepts and their implementations using Python libraries. Each chapter of the book walks you through an industry-adopted application. You'll implement ML techniques in areas such as exploratory data analysis, feature engineering, and natural language processing (NLP) in a clear and easy-to-follow way. With the help of this extended and updated edition, you'll understand how to tackle data-driven problems and implement your solutions with the powerful yet simple Python language and popular Python packages and tools such as TensorFlow, scikit-learn, gensim, and Keras. To aid your understanding of popular ML algorithms, the book covers interesting and easy-to-follow examples such as news topic modeling and classification, spam email detection, stock price forecasting, and more. By the end of the book, you'll have put together a broad picture of the ML ecosystem and will be well-versed in best practices for applying ML techniques to make the most of new opportunities.
Table of Contents (15 chapters)

Section 1: Fundamentals of Machine Learning
Section 2: Practical Python Machine Learning By Example
Section 3: Python Machine Learning Best Practices

Training a logistic regression model

Now, the question is how we can obtain the optimal w such that the cost function, J(w), is minimized. We can do so using gradient descent:

Training a logistic regression model using gradient descent

Gradient descent (also called steepest descent) is a first-order iterative optimization procedure for minimizing an objective function. In each iteration, it takes a step proportional to the negative gradient of the objective function at the current point; this means the candidate solution iteratively moves downhill toward the minimum of the objective function. The proportionality factor is called the learning rate, or step size. The update can be summarized in a mathematical equation as follows:

w := w - η∇J(w)

Here, w is the weight vector being learned, η is the learning rate, and ∇J(w) is the gradient of the cost function J(w) evaluated at the current w.
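
To make the update rule concrete, the following is a minimal NumPy sketch of batch gradient descent for logistic regression. The function name train_logistic_regression, the toy data, and the hyperparameter defaults (learning_rate, max_iter) are illustrative assumptions rather than fixed by the text, and the bias (intercept) term is omitted for brevity (it can be absorbed by appending a constant 1 feature to X):

import numpy as np

def sigmoid(z):
    # Logistic function: maps raw scores to probabilities in (0, 1)
    return 1 / (1 + np.exp(-z))

def train_logistic_regression(X, y, learning_rate=0.1, max_iter=1000):
    # X: (n_samples, n_features) feature matrix; y: labels in {0, 1}
    weights = np.zeros(X.shape[1])
    for _ in range(max_iter):
        # Predicted probabilities under the current weights
        predictions = sigmoid(X @ weights)
        # Gradient of the average log loss J(w) with respect to w
        gradient = X.T @ (predictions - y) / len(y)
        # Move one step against the gradient, scaled by the learning rate
        weights -= learning_rate * gradient
    return weights

# Toy usage: labels roughly follow x1 + x2 > 1
X = np.array([[0.2, 0.1], [0.9, 0.8], [0.1, 0.4], [0.7, 0.9]])
y = np.array([0, 1, 0, 1])
w = train_logistic_regression(X, y, learning_rate=0.5, max_iter=2000)
print(w, sigmoid(X @ w))

Each iteration computes the gradient over the full training set before taking a step, which matches the plain (batch) form of the update rule above; stochastic and mini-batch variants swap in one sample or a small subset per step.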