Mastering Numerical Computing with NumPy

By: Umit Mert Cakmak, Tiago Antao, Mert Cuhadaroglu

Overview of this book

NumPy is one of the most important scientific computing libraries available for Python. Mastering Numerical Computing with NumPy teaches you how to achieve expert-level competency in performing complex operations, with in-depth coverage of advanced concepts. Beginning with NumPy's arrays and functions, you will familiarize yourself with linear algebra concepts in order to perform vector and matrix math operations. You will thoroughly understand and practice data processing, exploratory data analysis (EDA), and predictive modeling. You will then move on to practical examples that show you how to use NumPy statistics to explore US housing data and develop a predictive model using simple and multiple linear regression techniques. Once you have got to grips with the basics, you will explore unsupervised learning and clustering algorithms, followed by guidance on how to write better NumPy code while keeping advanced considerations in mind. The book also demonstrates the use of different high-performance numerical computing libraries and their relationship with NumPy. You will study how to benchmark the performance of different configurations and choose the best one for your system. By the end of this book, you will have become an expert in handling and performing complex data manipulations.

Hyperparameters

A hyperparameter can be considered a high-level parameter that determines one of the various properties of a model, such as its complexity, training behavior, or learning rate. These parameters naturally differ from model parameters in that they must be set before training starts.

For example, the k in k-means or k-nearest neighbors is a hyperparameter of those algorithms. In k-means, k denotes the number of clusters to be found, while in k-nearest neighbors it denotes the number of closest records used to make a prediction.
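The following is a minimal sketch (not taken from the book) of how k is supplied up front as a hyperparameter, assuming scikit-learn is available alongside NumPy; the dataset and parameter values are purely illustrative.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 2)                  # 100 random two-dimensional points
y = (X[:, 0] > 0.5).astype(int)       # toy labels for the classifier

# k (n_clusters) is fixed before fitting; the model then learns the cluster centers
kmeans = KMeans(n_clusters=3, random_state=0).fit(X)
print(kmeans.cluster_centers_.shape)  # (3, 2)

# k (n_neighbors) is likewise fixed up front; predictions use the 5 closest records
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:3]))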

Tuning hyperparameters is a crucial step in any machine learning project for improving predictive performance. There are different tuning techniques, such as grid search, randomized search, and Bayesian optimization, but these techniques are beyond the scope of this chapter.
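As a quick taste of what such tuning looks like (it is not covered further in this chapter), here is a hedged sketch of a grid search over k for k-nearest neighbors, again assuming scikit-learn; the candidate values and data are illustrative only.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # toy binary labels

# try a few candidate values of k and keep the one with the best
# cross-validated accuracy
search = GridSearchCV(KNeighborsClassifier(),
                      param_grid={'n_neighbors': [1, 3, 5, 7, 9]},
                      cv=5)
search.fit(X, y)
print(search.best_params_)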

Let's have a quick look at the k-means algorithm's parameters...