Java Data Analysis

By: John R. Hubbard
Overview of this book

Data analysis is the process of inspecting, cleansing, transforming, and modeling data with the aim of discovering useful information. Java is one of the most popular languages for data analysis tasks, and this book will help you learn the tools and techniques to conduct it in Java without any hassle. After a quick overview of what data science is and the steps involved in the process, you'll learn statistical data analysis techniques and implement them using popular Java APIs and libraries. Through practical examples, you will also learn machine learning concepts such as classification and regression. Along the way, you'll familiarize yourself with tools such as RapidMiner and WEKA and see how these Java-based tools can be used effectively for analysis. You will also learn how to analyze text and other types of multimedia, and how to work with relational, NoSQL, and time-series data. This book will also show you how to use different Java-based libraries to create insightful and easy-to-understand plots and graphs. By the end of this book, you will have a solid understanding of the various data analysis techniques and how to implement them using Java.

Apache Hadoop


Apache Hadoop is an open-source software system that allows for the distributed storage and processing of very large datasets. It implements the MapReduce framework.

The system includes these modules:

  • Hadoop Common: The common libraries and utilities that support the other Hadoop modules

  • Hadoop Distributed File System (HDFS™): A distributed filesystem that stores data on commodity machines, providing high-throughput access across the cluster

  • Hadoop YARN: A platform for job scheduling and cluster resource management

  • Hadoop MapReduce: An implementation of the Google MapReduce framework
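The MapReduce model behind the last module can be illustrated without a Hadoop cluster at all. The sketch below (a hypothetical `MiniMapReduce` class, not Hadoop API code, which would instead extend `Mapper` and `Reducer`) runs the classic word-count job in plain Java: the map phase emits a `(word, 1)` pair for each word, and the shuffle/reduce phase groups the pairs by key and sums the counts.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MiniMapReduce {

    // Map phase: split a line into words and emit a (word, 1) pair for each
    static Stream<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1));
    }

    // Shuffle + reduce phase: group the pairs by key and sum the counts
    static Map<String, Integer> reduce(Stream<Map.Entry<String, Integer>> pairs) {
        return pairs.collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        List<String> lines = List.of("to be or not to be", "to do or not to do");
        Map<String, Integer> counts =
                reduce(lines.stream().flatMap(MiniMapReduce::map));
        System.out.println(counts);
    }
}
```

In a real Hadoop job, the `lines` would be blocks of an HDFS file, the map calls would run in parallel on the nodes holding those blocks, and the framework, not a stream collector, would perform the shuffle that routes all pairs with the same key to one reducer.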

Hadoop's storage layer was inspired by the Google File System, described in a paper Google published in 2003. Hadoop's developer, Doug Cutting, named the project after his son's toy elephant. By 2006, that storage layer had become HDFS, the Hadoop Distributed File System.

In April 2006, using MapReduce, Hadoop set a record by sorting 1.8 TB of data, distributed across 188 nodes, in under 48 hours. Two years later, it set the world record by sorting one terabyte of data in 209 seconds...