Data Science Algorithms in a Week - Second Edition

By: David Natingga
Overview of this book

Machine learning applications are highly automated and self-modifying, and they continue to improve over time with minimal human intervention as they learn from their training data. To address the complex nature of various real-world data problems, specialized machine learning algorithms have been developed. Through algorithmic and statistical analysis, these models can also be leveraged to gain new knowledge from existing data. Data Science Algorithms in a Week addresses the problems of accurate and efficient data classification and prediction. Over the course of seven days, you will be introduced to seven algorithms, along with exercises that will help you understand different aspects of machine learning. You will see how to pre-cluster your data in order to optimize its classification for large datasets, and how to predict data based on existing trends in your dataset. The book covers algorithms such as k-nearest neighbors, Naive Bayes, decision trees, random forests, k-means, regression, and time-series analysis. By the end of this book, you will understand how to choose machine learning algorithms for clustering, classification, and regression, and know which one is best suited to your problem.

Appendix 3. Glossary of Algorithms and Methods in Data Science

  • k-nearest neighbors algorithm: An algorithm that classifies an unknown data item according to the majority class among the k neighbors closest to that item (a minimal sketch in Python appears after this glossary).
  • Naive Bayes classifier: A way to classify a data item using Bayes' theorem on conditional probabilities, P(A|B) = (P(B|A) * P(A)) / P(B). It assumes that the variables in the data are conditionally independent given the class, which means that no variable affects the probability of any other variable attaining a certain value (a worked sketch appears after this glossary).
  • Decision tree: A model that classifies a data item into one of the classes at a leaf node by following, from the root down, the branches whose conditions match the item's attribute values (a hand-built example appears after this glossary).
  • Random decision tree: A decision tree in which every branch is formed using only a random subset of the available variables during its construction.
  • Random forest: An ensemble of random decision trees, each constructed on a random subset of the data drawn with replacement; a data item is assigned to the class that receives the majority vote from the trees.
  • k-means algorithm: A clustering algorithm that divides a dataset into k groups such that the members of each group are as similar as possible, that is, closest to one another (sketched after this glossary).
  • Regression analysis: A method for estimating the unknown parameters in a functional model that predicts an output variable from the input variables, for example, estimating a and b in the linear model y = a*x + b (a least squares sketch appears after this glossary).
  • Time series analysis: The analysis of data that depends on time; it mainly involves the analysis of trends and seasonality (a smoothing sketch appears after this glossary).
  • Support vector machines: A classification algorithm that finds the hyperplane separating the training data into the given classes with the widest margin; this hyperplane is then used to classify further data.
  • Principal component analysis: A preprocessing technique that transforms the data into a set of uncorrelated components ordered by how much of the variance in the data each one explains; it can be used, for example, to rescale or reduce the input variables according to how much impact they have on the end result.
  • Text mining: The search and extraction of text, and its possible conversion into numerical data for use in data analysis.
  • Neural networks: A machine learning model consisting of a network of simple classifiers, each of which makes a decision based on the input data or on the outputs of other classifiers in the network.
  • Deep learning: Machine learning with neural networks composed of many layers, which allows the network to learn progressively more abstract representations of the data.
  • Apriori association rules: Rules observed in the training data on the basis of which future data can be classified; the Apriori algorithm is a classic method for mining such rules.
  • PageRank: A search algorithm that assigns the greatest relevance to a search result that has the greatest number of incoming web links from the most relevant results for a given search term. In mathematical terms, PageRank computes an eigenvector of the link matrix that represents these measures of relevance.
  • Ensemble learning: A method of learning in which several learning algorithms (or several instances of one algorithm) are combined to reach a final conclusion.
  • Bagging: A method of classifying a data item by the majority vote of classifiers trained on random subsets of the training data drawn with replacement.
  • Genetic algorithms: Machine learning algorithms inspired by evolutionary processes, for example, selection, in which the classifiers with the greatest accuracy are trained further.
  • Inductive inference: A machine learning method for learning the rules of the process that produced the observed data.
  • Bayesian networks: A graphical model representing random variables and their conditional dependencies.
  • Singular value decomposition: A factorization of a matrix that generalizes eigendecomposition; it is used in least squares methods.
  • Boosting: A machine learning meta-algorithm that builds a strong classifier from an ensemble of weak classifiers trained in sequence, with each new classifier concentrating on the data items that the previous ones misclassified.
  • Expectation maximization: An iterative method for finding the parameters of a model that maximize the likelihood of the observed data; it alternates between an expectation (E) step and a maximization (M) step.
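
The following short sketches in Python illustrate several of the entries above. All sample data, names, and parameter choices in them are invented for illustration; they are minimal sketches, not the book's own implementations.

First, k-nearest neighbors: the query point is assigned the majority class among its k closest training points.

from collections import Counter
import math

def knn_classify(training, query, k=3):
    # training: a list of (point, label) pairs; each point is a tuple of numbers
    by_distance = sorted(training, key=lambda pl: math.dist(pl[0], query))
    nearest_labels = [label for _, label in by_distance[:k]]
    # Assign the majority class among the k closest neighbors
    return Counter(nearest_labels).most_common(1)[0][0]

training = [((1, 1), 'A'), ((1, 2), 'A'), ((5, 5), 'B'), ((6, 5), 'B')]
print(knn_classify(training, (2, 1)))  # -> 'A'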
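
Naive Bayes applies Bayes' theorem under the independence assumption: P(class|features) is proportional to P(class) multiplied by P(feature|class) for each feature. The toy priors and likelihoods below are made up.

def naive_bayes_posterior(priors, likelihoods, observed):
    # priors: {class: P(class)}
    # likelihoods: {class: [P(feature_i = 1 | class), ...]}
    # observed: a list of 0/1 feature values
    scores = {}
    for c, p_c in priors.items():
        score = p_c
        for p_f, x in zip(likelihoods[c], observed):
            score *= p_f if x == 1 else (1 - p_f)
        scores[c] = score
    total = sum(scores.values())  # normalizing plays the role of P(B)
    return {c: s / total for c, s in scores.items()}

print(naive_bayes_posterior(
    {'spam': 0.4, 'ham': 0.6},
    {'spam': [0.8, 0.7], 'ham': [0.1, 0.3]},
    observed=[1, 0]))  # -> {'spam': ~0.70, 'ham': ~0.30}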
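
A decision tree can be written down directly as nested branches; classification walks from the root, following the branch that matches the item, until it reaches a class label at a leaf. The weather attributes here are invented.

tree = ('outlook', {
    'sunny': ('windy', {'yes': 'stay in', 'no': 'play'}),
    'rainy': 'stay in',
    'overcast': 'play',
})

def classify(node, item):
    if isinstance(node, str):  # a leaf node holds a class label
        return node
    attribute, branches = node
    return classify(branches[item[attribute]], item)

print(classify(tree, {'outlook': 'sunny', 'windy': 'no'}))  # -> 'play'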
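
k-means alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points, until the centroids stop moving. The points and the initial centroids below are illustrative.

import math

def k_means(points, centroids, iterations=100):
    groups = {}
    for _ in range(iterations):
        # Assignment step: group each point with its nearest centroid
        groups = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            groups[nearest].append(p)
        # Update step: move each centroid to the mean of its group
        new_centroids = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centroids[i]
            for i, g in groups.items()
        ]
        if new_centroids == centroids:
            break  # converged: the assignments no longer change
        centroids = new_centroids
    return centroids, groups

points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
print(k_means(points, [(0, 0), (10, 10)])[0])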
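
For regression, the least squares estimates of a and b in the linear model y = a*x + b have a closed form; the sample data below lies exactly on y = 2x + 1.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

print(fit_line([1, 2, 3, 4], [3, 5, 7, 9]))  # -> (2.0, 1.0)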
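
Finally, a simple moving average, one elementary tool in time series analysis for exposing a trend by smoothing out short-term fluctuations; the window size and data are illustrative.

def moving_average(series, window=3):
    # Average each run of `window` consecutive values
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

print(moving_average([3, 5, 4, 7, 6, 9, 8]))  # -> [4.0, 5.33..., 5.67..., 7.33..., 7.67...]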