Scala for Machine Learning, Second Edition
Overview of this book

The discovery of information through data clustering and classification is becoming a key differentiator for competitive organizations. Machine learning applications are everywhere, from self-driving cars, engineering design, logistics, manufacturing, and trading strategies to the detection of genetic anomalies. This book is your one-stop guide to the functional capabilities of the Scala programming language that are critical to the creation of machine learning algorithms, such as dependency injection and implicits. You start by learning data preprocessing and filtering techniques. Following this, you'll move on to unsupervised learning techniques such as clustering and dimension reduction, followed by probabilistic graphical models such as Naïve Bayes, hidden Markov models, and Monte Carlo inference. Further, the book covers discriminative algorithms such as linear and logistic regression with regularization, kernelization, support vector machines, neural networks, and deep learning. You'll then move on to evolutionary computing, multi-armed bandit algorithms, and reinforcement learning. Finally, the book includes a comprehensive overview of parallel computing in Scala and Akka, followed by a description of Apache Spark and its ML library. With updated code based on the latest version of Scala and comprehensive examples, this book will ensure that you have more than just a solid fundamental knowledge of machine learning with Scala.

Probabilistic graphical models


Naïve Bayes qualifies as a very simple probabilistic graphical model, commonly visualized as a directed graph in which a vertex represents a prior or posterior probability and an edge represents a conditional probability.
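For instance, for a class C and features x1, ..., xn assumed conditionally independent given C, the Naïve Bayes posterior factorizes along the edges of that graph as p(C|x1, ..., xn) ∝ p(C).p(x1|C). ... .p(xn|C).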

Given two events or observations, X and Y, the joint probability of X and Y is defined as p(X,Y) = p(X∩Y). If the observations X and Y are not related, an assumption known as independence, then p(X,Y) = p(X).p(Y). The conditional probability of event Y given X is defined as p(Y|X) = p(X,Y)/p(X).
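
As a minimal sketch, these definitions can be estimated from a sample of paired boolean observations; the data and names here are illustrative assumptions, not code from the book:

  object ProbabilityEstimate extends App {
    // Sample of paired observations (x, y)
    val sample: Seq[(Boolean, Boolean)] = Seq(
      (true, true), (true, false), (false, true),
      (true, true), (false, false), (true, true)
    )
    val n = sample.size.toDouble

    val pX  = sample.count(_._1) / n                      // marginal p(X)
    val pY  = sample.count(_._2) / n                      // marginal p(Y)
    val pXY = sample.count { case (x, y) => x && y } / n  // joint p(X,Y)
    val pYGivenX = pXY / pX                               // p(Y|X) = p(X,Y)/p(X)

    println(f"p(X)=$pX%.2f p(Y)=$pY%.2f p(X,Y)=$pXY%.2f p(Y|X)=$pYGivenX%.2f")
    // Under independence, p(X,Y) would equal p(X).p(Y)
    println(f"p(X).p(Y)=${pX * pY}%.2f")
  }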

It is obvious that conditional or joint probabilities involving a large number of variables (for instance, p(X,Y,U,V,W | A,B)) can be difficult to interpret. As a picture is worth a thousand words, researchers introduced graphical models to describe probabilistic relations between random variables using graphs [5:1].

There are two categories of graphs and therefore graphical models:

  • Directed graphs such as Bayesian networks (see the sketch after this list)

  • Undirected graphs such as conditional random fields
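
To make the directed case concrete, here is a minimal sketch of a two-node Bayesian network, Cloudy -> Rain; the graph, names, and probability values are illustrative assumptions, not code from the book. The joint probability factorizes along the edge as p(Cloudy,Rain) = p(Cloudy).p(Rain|Cloudy), and the posterior p(Cloudy|Rain) follows from the definition of conditional probability given earlier:

  object BayesNetSketch extends App {
    // Prior at the parent vertex: p(Cloudy)
    val pCloudy = 0.4
    // Conditional probabilities attached to the edge: p(Rain | Cloudy)
    val pRainGivenCloudy = Map(true -> 0.8, false -> 0.1)

    // Chain-rule factorization of the joint: p(Cloudy,Rain) = p(Cloudy).p(Rain|Cloudy)
    def joint(cloudy: Boolean, rain: Boolean): Double = {
      val pC = if (cloudy) pCloudy else 1.0 - pCloudy
      val pR = if (rain) pRainGivenCloudy(cloudy) else 1.0 - pRainGivenCloudy(cloudy)
      pC * pR
    }

    // Marginal p(Rain), summing the joint over the parent vertex
    val pRain = joint(cloudy = true, rain = true) + joint(cloudy = false, rain = true)
    // Posterior p(Cloudy|Rain) = p(Cloudy,Rain)/p(Rain)
    val pCloudyGivenRain = joint(cloudy = true, rain = true) / pRain

    println(f"p(Rain)=$pRain%.3f  p(Cloudy|Rain)=$pCloudyGivenRain%.3f")
  }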