Numerical Computing with Python

By: Pratap Dangeti, Allen Yu, Claire Chung, Aldrin Yim, Theodore Petrou
Overview of this book

Data mining, or parsing data to extract useful insights, is a niche skill that can transform your career as a data scientist. Python is a flexible programming language equipped with a strong suite of libraries and toolkits, and it gives you the perfect platform to sift through your data and mine the insights you seek. This Learning Path is designed to familiarize you with the Python libraries and the underlying statistics that you need to get comfortable with data mining. You will learn how to use pandas, Python's popular data analysis library, to analyze different kinds of data, and leverage the power of Matplotlib to generate appealing and impressive visualizations for the insights you have derived. You will also explore different machine learning techniques and statistics that enable you to build powerful predictive models. By the end of this Learning Path, you will have the perfect foundation to take your data mining skills to the next level and set yourself on the path to becoming a sought-after data science professional.

This Learning Path includes content from the following Packt products:

• Statistics for Machine Learning by Pratap Dangeti
• Matplotlib 2.x By Example by Allen Yu, Claire Chung, Aldrin Yim
• Pandas Cookbook by Theodore Petrou

Tuning of k-value in KNN classifier


In the previous section, we evaluated the model with only a single k-value of three. In practice, any machine learning algorithm has knobs that must be tuned to find where better performance can be obtained. In the case of KNN, the main tuning parameter is the k-value (the number of neighbors). Hence, in the following code, we determine the best k-value with a simple grid search over a list of candidate values:

# Tuning of k-value for train & test data
>>> import numpy as np
>>> import pandas as pd
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.metrics import accuracy_score

# DataFrame to hold one row of results per candidate k-value
>>> dummyarray = np.empty((5, 3))
>>> k_valchart = pd.DataFrame(dummyarray)
>>> k_valchart.columns = ["K_value", "Train_acc", "Test_acc"]

>>> k_vals = [1, 2, 3, 4, 5]

>>> for i in range(len(k_vals)):
...     knn_fit = KNeighborsClassifier(n_neighbors=k_vals[i], p=2,
...                                    metric='minkowski')
...     knn_fit.fit(x_train, y_train)
...
...     print("\nK-value:", k_vals[i])
...
...     tr_accscore = round(accuracy_score(y_train,
...                                        knn_fit.predict(x_train)), 3)
...     print("\nK-Nearest Neighbors - Train Confusion Matrix\n\n",
...           pd.crosstab(y_train, knn_fit.predict(x_train),
...                       rownames=["Actual"], colnames=["Predicted"]))
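The manual loop above compares train and test accuracy for each candidate k. As a minimal sketch of an alternative approach (not from the book), the same search can be delegated to scikit-learn's GridSearchCV, which scores each k-value with cross-validation instead of a single train/test split. The synthetic dataset generated here is only a stand-in for the book's x_train and y_train:

```python
# Sketch: grid search over k with cross-validation, on synthetic stand-in data
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the book's dataset
x, y = make_classification(n_samples=300, n_features=5, random_state=42)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=42)

# Same candidate k-values as above; 5-fold CV accuracy picks the winner
param_grid = {"n_neighbors": [1, 2, 3, 4, 5]}
grid = GridSearchCV(KNeighborsClassifier(p=2, metric="minkowski"),
                    param_grid, cv=5, scoring="accuracy")
grid.fit(x_train, y_train)

print("Best k:", grid.best_params_["n_neighbors"])
print("CV accuracy:", round(grid.best_score_, 3))
print("Test accuracy:", round(grid.score(x_test, y_test), 3))
```

Because cross-validation averages accuracy over several held-out folds, it tends to give a more stable choice of k than comparing a single train/test split, at the cost of fitting the model once per fold per candidate value.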