Java Data Science Cookbook

By Rushdi Shams

Overview of this book

If you are looking to build production-ready data science models, Java has come to the rescue. With the aid of strong libraries such as MLlib, Weka, DL4j, and more, you can efficiently perform all the data science tasks you need to. This unique book provides modern recipes to solve your common and not-so-common data science-related problems. We start with recipes to help you obtain, clean, index, and search data. Then you will learn a variety of techniques to analyze, learn from, and retrieve information from data. You will also understand how to handle big data, learn deeply from data, and visualize data. Finally, you will work through unique recipes that solve your problems while taking data science to production, writing distributed data science applications, and much more - things that will come in handy at work.
Table of Contents (16 chapters)


In this chapter, you will see three key technologies of the Big Data ecosystem that are extremely useful for data scientists: Apache Mahout, Apache Spark, and Spark's machine learning library, MLlib.

We will start our chapter with Apache Mahout, a scalable, distributed machine learning platform for classification, regression, clustering, and collaborative filtering tasks. Mahout started as a machine learning workbench that ran only on Hadoop MapReduce but eventually adopted Apache Spark as its execution platform.

Apache Spark is a framework that parallelizes Big Data processing and resembles MapReduce in that it also distributes data across clusters. One key difference, however, is that Spark keeps data in memory as much as possible, while MapReduce continuously writes to and reads from disk. As a result, Spark is often much faster than MapReduce. We will see how you, as a data scientist, can use Spark to do simple text-mining related...
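To give a flavor of how Spark's in-memory model looks from Java, here is a minimal word-count sketch run in local mode. This is an illustrative example, not a recipe from the book: the input path, output directory, and class name are placeholders, and it assumes the `spark-core` dependency is on the classpath.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        // "local[*]" runs Spark on all local cores; on a cluster you would
        // point this at the cluster's master URL instead.
        SparkConf conf = new SparkConf()
                .setAppName("WordCount")
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Illustrative input path.
        JavaRDD<String> lines = sc.textFile("input.txt");

        // Split each line on whitespace into individual words.
        JavaRDD<String> words = lines.flatMap(
                line -> Arrays.asList(line.split("\\s+")).iterator());

        // cache() asks Spark to keep this RDD in memory, so that repeated
        // actions on it avoid re-reading the source from disk -- this is
        // the key contrast with MapReduce described above.
        words.cache();

        // Classic map/reduce word count: (word, 1) pairs summed by key.
        JavaPairRDD<String, Integer> counts = words
                .mapToPair(w -> new Tuple2<>(w, 1))
                .reduceByKey(Integer::sum);

        counts.saveAsTextFile("counts"); // illustrative output directory
        sc.close();
    }
}
```

The same pipeline of transformations (`flatMap`, `mapToPair`, `reduceByKey`) is lazy: nothing executes until an action such as `saveAsTextFile` or `collect` is called.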