Java Data Analysis

By: John R. Hubbard

Overview of this book

Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the aim of discovering useful information. Java is one of the most popular languages for performing data analysis tasks. This book will help you learn the tools and techniques in Java to conduct data analysis without any hassle. After getting a quick overview of what data science is and the steps involved in the process, you’ll learn statistical data analysis techniques and implement them using popular Java APIs and libraries. Through practical examples, you will also learn machine learning concepts such as classification and regression. In the process, you’ll familiarize yourself with tools such as RapidMiner and WEKA and see how these Java-based tools can be used effectively for analysis. You will also learn how to analyze text and other types of multimedia, and how to work with relational, NoSQL, and time-series data. This book will also show you how to use different Java-based libraries to create insightful and easy-to-understand plots and graphs. By the end of this book, you will have a solid understanding of the various data analysis techniques and how to implement them using Java.

Bayesian classifiers


The naive Bayes classification algorithm is based upon Bayes' Theorem, which we examined in Chapter 4, Statistics. It is embodied in the formula:

P(E|F) = P(F|E) P(E) / P(F)

where E and F are events with probabilities P(E) and P(F), P(E|F) is the conditional probability of E given that F is true, and P(F|E) is the conditional probability of F given that E is true. The purpose of this formula is to compute one conditional probability, P(E|F), in terms of its reverse conditional probability, P(F|E).
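As a quick numerical illustration (the probabilities here are made up for the example, not taken from the book), the formula can be applied directly:

```java
public class BayesExample {

    // Computes P(E|F) from P(F|E), P(E), and P(F) using Bayes' theorem:
    // P(E|F) = P(F|E) * P(E) / P(F)
    static double bayes(double pFGivenE, double pE, double pF) {
        return pFGivenE * pE / pF;
    }

    public static void main(String[] args) {
        // Hypothetical values: P(F|E) = 0.9, P(E) = 0.1, P(F) = 0.3
        double pEGivenF = bayes(0.9, 0.1, 0.3);
        System.out.printf("P(E|F) = %.4f%n", pEGivenF); // 0.9 * 0.1 / 0.3 = 0.3
    }
}
```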

In the context of classification analysis, we assume that the population of data points is partitioned into m disjoint categories, C1, C2, ..., Cm. Then, for any data point x and any specified category Ci:

P(Ci|x) = P(x|Ci) P(Ci) / P(x)

The Bayesian algorithm predicts the category Ci in which the point x is most likely to be; that is, it finds the Ci that maximizes P(Ci|x). But we can see from the formula that this is the same Ci that maximizes P(x|Ci)P(Ci), since the denominator P(x) is constant across all categories.
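This argmax step can be sketched in a few lines of Java. The method below is a minimal illustration, assuming the priors P(Ci) and the likelihoods P(x|Ci) for a single point x are already known; it simply picks the category with the largest product P(x|Ci)P(Ci), omitting the denominator P(x) because it is the same for every category:

```java
public class NaiveBayesSketch {

    /**
     * Returns the index of the category Ci that maximizes P(x|Ci) * P(Ci).
     *
     * @param priors      priors[i] = P(Ci), the prior probability of category Ci
     * @param likelihoods likelihoods[i] = P(x|Ci), the likelihood of the
     *                    observed data point x under category Ci
     */
    static int classify(double[] priors, double[] likelihoods) {
        int best = 0;
        double bestScore = priors[0] * likelihoods[0];
        for (int i = 1; i < priors.length; i++) {
            double score = priors[i] * likelihoods[i];
            if (score > bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Three categories with made-up priors and likelihoods for one point x:
        double[] priors = {0.5, 0.3, 0.2};      // P(C1), P(C2), P(C3)
        double[] likelihoods = {0.1, 0.4, 0.3}; // P(x|C1), P(x|C2), P(x|C3)
        // Scores: 0.05, 0.12, 0.06 -> index 1 (C2) wins
        System.out.println("Predicted category index: "
                + classify(priors, likelihoods));
    }
}
```

In practice the likelihoods P(x|Ci) are estimated from training data under the "naive" assumption that the features of x are conditionally independent, which is what makes them cheap to compute.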

So that's the algorithm...