Machine Learning Algorithms - Second Edition

Overview of this book

Machine learning has gained tremendous popularity for its powerful and fast predictions on large datasets. However, the true forces behind its powerful output are the complex algorithms, involving substantial statistical analysis, that churn through large datasets and generate meaningful insight. This second edition of Machine Learning Algorithms walks you through prominent developments in machine learning algorithms, which constitute major contributions to the machine learning process and help you to strengthen and master statistical interpretation across the areas of supervised, semi-supervised, and reinforcement learning. Once the core concepts of an algorithm have been covered, you'll explore real-world examples based on the most widespread libraries, such as scikit-learn, NLTK, TensorFlow, and Keras. You will discover new topics such as principal component analysis (PCA), independent component analysis (ICA), Bayesian regression, discriminant analysis, advanced clustering, and Gaussian mixtures. By the end of this book, you will have studied machine learning algorithms and be able to put them into production to make your machine learning applications more innovative.

Machine learning and big data

Another area that can be exploited using machine learning is big data. After the first release of Apache Hadoop, which provided an efficient implementation of the MapReduce paradigm, the amount of information managed in different business contexts grew exponentially. At the same time, the opportunity to use it for machine learning purposes arose, and several applications, such as mass collaborative filtering, became a reality.

Imagine an online store with 1 million users and only 1,000 products. Consider a matrix where each user is associated with every product by an implicit or explicit ranking. This matrix will contain 1,000,000 x 1,000 cells, and even though the number of products is very limited, any operation performed on it will be slow and memory-consuming. With a cluster and parallel algorithms, however, such a problem disappears, and operations of a much higher dimensionality can be carried out in a very short time.
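
To get a feeling for the numbers, here's a quick back-of-the-envelope sketch (not one of the book's examples) comparing a dense representation of such a user-product matrix with a sparse one built with SciPy; the figure of 20 ratings per user is a purely hypothetical assumption:

    import numpy as np
    from scipy.sparse import csr_matrix

    n_users, n_products = 1_000_000, 1_000

    # A dense float64 matrix needs 1,000,000 x 1,000 x 8 bytes, about 8 GB
    print(f"Dense: {n_users * n_products * 8 / 1e9:.1f} GB")

    # In practice, each user rates only a few products, so the matrix is
    # extremely sparse. Assume 20 ratings per user on average (hypothetical):
    n_ratings = 20 * n_users
    rows = np.random.randint(0, n_users, size=n_ratings)
    cols = np.random.randint(0, n_products, size=n_ratings)
    vals = np.random.randint(1, 6, size=n_ratings).astype(np.float64)

    R = csr_matrix((vals, (rows, cols)), shape=(n_users, n_products))
    sparse_bytes = R.data.nbytes + R.indices.nbytes + R.indptr.nbytes
    print(f"Sparse: {sparse_bytes / 1e6:.0f} MB")

Even with a compact representation, though, any non-trivial factorization or similarity computation over such a matrix quickly benefits from being distributed across a cluster.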

Think about training an image classifier with 1 million samples. A single instance needs to iterate several times, processing small batches of pictures. Even though this task can be carried out with a streaming approach (using a limited amount of memory), it's not surprising to have to wait a few days before the model begins to perform well. Adopting a big data approach instead, it's possible to asynchronously train several local models, periodically share the updates, and re-synchronize them all with a master model. This technique has also been exploited to solve some reinforcement learning problems, where many agents (often managed by different threads) play the same game, providing periodic contributions to a global intelligence.
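
The following toy sketch (my own simplified, synchronous illustration, not an actual distributed implementation) captures the core idea: each worker refines a copy of the master parameters on its own data shard, and the master is periodically re-synchronized by averaging the local models:

    import numpy as np

    def shard_gradient(w, X, y):
        # Gradient of a least-squares loss on one worker's data shard
        return 2.0 * X.T @ (X @ w - y) / len(y)

    rng = np.random.default_rng(0)
    n_workers, n_features = 4, 10
    shards = [(rng.normal(size=(500, n_features)), rng.normal(size=500))
              for _ in range(n_workers)]

    w_master = np.zeros(n_features)
    for sync_round in range(20):
        local_models = []
        for X, y in shards:
            w = w_master.copy()
            for _ in range(10):              # a few local gradient steps
                w -= 0.01 * shard_gradient(w, X, y)
            local_models.append(w)
        # Periodic re-synchronization: average the local models
        w_master = np.mean(local_models, axis=0)

In a real system, the workers would run asynchronously on different machines and the updates would travel over the network, but the train-locally/merge-periodically pattern is the same.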

Not every machine learning problem is suitable for big data, and not all big datasets are really useful when training models. However, their conjunction in particular situations can lead to extraordinary results by removing many limitations that often affect smaller scenarios. Unfortunately, both machine learning and big data are topics subject to continuous hype; hence, one of the tasks that an engineer/scientist has to accomplish is understanding when a particular technology is really helpful and when its burden outweighs the benefits. Modern computers often have enough resources to process datasets that, a few years ago, were easily considered big data. Therefore, I invite the reader to carefully analyze each situation and think about the problem from a business viewpoint as well. A Spark cluster has a cost that is sometimes completely unjustified. I've personally seen clusters of two medium-sized machines running tasks that a laptop could have carried out even faster. Hence, always perform a descriptive/prescriptive analysis of the problem and the data, trying to focus on the following:

  • The current situation
  • Objectives (what do we need to achieve?)
  • Data and dimensionality (do we work with batch data? Do we have incoming streams?)
  • Acceptable delays (do we need real-time? Is it possible to process once a day/week?)

Big data solutions are justified, for example, when the following is the case:

  • The dataset cannot fit in the memory of a high-end machine
  • The incoming data flow is huge, continuous, and needs prompt computations (for example, clickstreams, web analytics, message dispatching, and so on)
  • It's not possible to split the data into small chunks because the acceptable delays are minimal (this piece of information must be mathematically quantified)
  • The operations can be parallelized efficiently (nowadays, many important algorithms have been implemented in distributed frameworks, but there are still tasks that cannot be processed by using parallel architectures)
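
Regarding the last point, it's worth remembering how far single-machine parallelism can go before a distributed framework becomes necessary. As a small illustration (the dataset size here is arbitrary), many scikit-learn estimators already exploit all available cores:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # A synthetic dataset that comfortably fits in RAM
    X, y = make_classification(n_samples=100_000, n_features=50,
                               random_state=0)

    # n_jobs=-1 trains the trees in parallel on all available cores;
    # a cluster is only justified when cores and memory are exhausted
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1,
                                 random_state=0)
    clf.fit(X, y)
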

In Chapter 12, Introduction to Recommendation Systems, we're going to discuss how to implement collaborative filtering using Apache Spark. The same framework will also be adopted for an example of Naive Bayes classification.

If you want to know more about the whole Hadoop ecosystem, visit http://hadoop.apache.org. Apache Mahout (http://mahout.apache.org) is a dedicated machine learning framework, and Spark (http://spark.apache.org), one of the fastest computational engines, has a module called Machine Learning Library (MLlib), which implements many common algorithms that benefit from parallel processing.
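
As a small preview of what Spark-based collaborative filtering looks like (a minimal sketch anticipating Chapter 12, with illustrative data and column names), MLlib exposes the alternating least squares (ALS) algorithm through a simple API:

    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("cf-sketch").getOrCreate()

    # Illustrative (user, item, rating) triples; a real dataset would be
    # loaded from distributed storage (HDFS, S3, and so on)
    ratings = spark.createDataFrame(
        [(0, 1, 4.0), (0, 3, 2.0), (1, 1, 5.0), (1, 2, 1.0), (2, 3, 4.0)],
        ["user", "item", "rating"])

    als = ALS(userCol="user", itemCol="item", ratingCol="rating",
              rank=10, maxIter=10, regParam=0.1)
    model = als.fit(ratings)

    # Top-3 recommendations for every user
    model.recommendForAllUsers(3).show()

The same code runs unchanged on a laptop or on a large cluster; only the Spark configuration and the data sources differ.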