
Big Data toolbox - dealing with the giant


Just like doctors cannot treat all medical symptoms with generic paracetamol and ibuprofen, data scientists need to use more potent methods to store and manage vast amounts of data. Knowing how Big Data can be defined, and what requirements have to be met for data to qualify as Big, we can now take a step forward and introduce a number of tools that specialize in dealing with these enormous data sets. Although traditional techniques may still be valid in certain circumstances, Big Data comes with its own ecosystem of scalable frameworks and applications that facilitate the processing and management of unusually large or fast data. In this chapter, we will briefly present several of the most common Big Data tools, which will be explored in greater detail later in the book.

Hadoop - the elephant in the room

If you have been in the Big Data industry for as little as one day, you surely must have heard the unfamiliar-sounding word Hadoop, at least every third sentence during frequent tea-break discussions with your work colleagues or fellow students. Named after Doug Cutting's child's favorite toy, a yellow stuffed elephant, Hadoop has been with us for nearly 11 years. Its origins go back to around 2002, when Doug Cutting was commissioned to lead the Apache Nutch project, a scalable open source search engine. Several months into the project, Cutting and his colleague Mike Cafarella (then a graduate student at the University of Washington) ran into serious problems with the scaling up and robustness of their Nutch framework, owing to growing storage and processing needs. The solution came from none other than Google, and more precisely from a paper titled The Google File System, authored by Ghemawat, Gobioff, and Leung, and published in the proceedings of the 19th ACM Symposium on Operating Systems Principles. The article revisited the original idea of Big Files invented by Larry Page and Sergey Brin, and proposed a revolutionary new method of storing large files partitioned into fixed-size 64 MB chunks across many nodes of a cluster built from cheap commodity hardware. To prevent failures and improve the efficiency of this setup, the file system created copies of the chunks of data and distributed them across a number of nodes, which were in turn mapped and managed by a master server. Several months later, Google surprised Cutting and Cafarella with another groundbreaking research article, MapReduce: Simplified Data Processing on Large Clusters, written by Dean and Ghemawat, and published in the Proceedings of the 6th Conference on Symposium on Operating Systems Design and Implementation.

The MapReduce framework became a kind of mortar between the bricks: the data distributed across numerous nodes of the file system on the one hand, and the outputs of data transformation and processing tasks on the other.

The MapReduce model contains three essential stages. The first phase is the Map procedure, which includes indexing and sorting the data into the desired structure based on the key-value pairs specified by the mapper (that is, the script doing the mapping). The Shuffle stage is responsible for redistributing the mapper's outputs across nodes depending on the key; that is, the outputs for one specific key are stored on the same node. The Reduce stage produces a summary output of the previously mapped and shuffled data, for example, a descriptive statistic such as the arithmetic mean of a continuous measurement for each key (for example, a categorical variable). A simplified data processing workflow, using the MapReduce framework on top of the Google File System, is presented in the following figure:

A diagram depicting a simplified Distributed File System architecture and stages of the MapReduce framework
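
To make the three stages more concrete, the following minimal sketch reproduces the Map, Shuffle, and Reduce logic in plain base R on a handful of hypothetical key-value pairs (the keys and values are invented purely for illustration; a real MapReduce job distributes exactly this kind of work in parallel across the nodes of a cluster):

# A minimal sketch of the Map, Shuffle, and Reduce stages in plain R.
# The keys and values below are hypothetical and serve only as an illustration.
input <- list(
  list(key = "books", value = 12.99),
  list(key = "music", value = 7.50),
  list(key = "books", value = 30.00),
  list(key = "music", value = 9.99),
  list(key = "games", value = 45.00)
)

# Map stage: emit (key, value) pairs in the structure required downstream
mapped <- lapply(input, function(record) list(key = record$key, value = record$value))

# Shuffle stage: group all values emitted for the same key together
keys   <- vapply(mapped, function(pair) pair$key, character(1))
values <- vapply(mapped, function(pair) pair$value, numeric(1))
shuffled <- split(values, keys)

# Reduce stage: summarise each key's values, here with the arithmetic mean
reduced <- lapply(shuffled, mean)

unlist(reduced)
#  books  games  music
# 21.495 45.000  8.745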

The ideas behind the Google File System model and the MapReduce paradigm resonated very well with Cutting and Cafarella's plans, and they introduced both frameworks into their own work on Nutch. For the first time, their web crawler algorithm could be run in parallel on several commodity machines with minimal supervision from a human engineer.

In 2006, Cutting moved to Yahoo!, and in 2008 Hadoop became a separate Apache project, independent of Nutch. Since then, it has been on a never-ending journey towards greater reliability and scalability, allowing ever bigger and faster workloads of data to be crunched effectively by gradually increasing numbers of nodes. In the meantime, Hadoop has also become available as an add-on service on leading cloud computing platforms such as Microsoft Azure, Amazon Elastic Compute Cloud (EC2), and Google Cloud Platform. This new, unrestricted, and flexible way of accessing shared and affordable commodity hardware enabled numerous companies, as well as individual data scientists and developers, to dramatically cut their production costs and process larger data than ever in a more efficient and robust manner. A few Hadoop record-breaking milestones are worth mentioning at this point. In a well-known real-world example from the end of 2007, the New York Times was able to convert more than 4TB of images in less than 24 hours, using a cluster of 100 nodes on Amazon EC2, for as little as $200. A job that would have potentially taken weeks of hard labor and a considerable number of man-hours could now be achieved at a fraction of the original cost, in a significantly shorter space of time. A year later, 1TB of data was sorted in 209 seconds, and in 2009 Yahoo! set a new record by sorting the same amount of data in just 62 seconds. In 2013, Hadoop reached its best Gray Sort Benchmark (http://sortbenchmark.org/) score so far: using 2,100 nodes, Thomas Graves from Yahoo! was able to sort 102.5TB in 4,328 seconds, roughly 1.42TB per minute.

In recent years, the Hadoop and MapReduce frameworks have been used extensively by the largest technology businesses such as Facebook, Google, and Yahoo!, by major financial and insurance companies, research institutes, and academia, and by individual Big Data enthusiasts. A number of enterprises offering commercial distributions of Hadoop, such as Cloudera (headed by Tom Reilly, with Doug Cutting as Chief Architect) or Hortonworks (currently led by Rob Bearden, a former senior officer at Oracle and an executive at successful open source companies such as SpringSource and JBoss, and previously run by Eric Baldeschwieler, a former Yahoo! engineer who worked with Cutting), have spun off from the original Hadoop project and evolved into separate entities, providing additional proprietary Big Data tools and extending the applications and usability of the Hadoop ecosystem. Although MapReduce and Hadoop revolutionized the way we process vast quantities of data, and hugely popularized Big Data analytics among businesses and individual users alike, they have also received some criticism for their persisting performance limitations, reliability issues, and certain programming difficulties. We will explore these limitations, and explain other Hadoop and MapReduce features in much greater detail, using practical examples in Chapter 4, Hadoop and MapReduce Framework for R.

Databases

There are many excellent online and offline resources and publications available to readers on both SQL-based Relational Database Management Systems (RDBMSs) and more modern non-relational, Not Only SQL (NoSQL) databases. This book will not attempt to describe each and every one of them in great detail; instead, it will provide you with several practical examples of how to store large amounts of information in such systems, carry out essential data crunching and processing using known and tested R packages, and extract the outputs of these Big Data transformations from the databases directly into your R session.

As mentioned earlier, in Chapter 5, R with Relational Database Management Systems (RDBMSs), we will begin with a very gentle introduction to standard and more traditional databases built on the relational model developed in the 1970s by the Oxford-educated English computer scientist Edgar Codd while he was working at IBM's laboratories in San Jose. Don't worry if you do not have much experience with databases yet. At this stage, it is only important for you to know that, in an RDBMS, the data is stored in a structured way, in the form of tables with fields and records. Depending on the specific industry that you come from, fields can be understood as either variables or columns, and records may alternatively be referred to as observations or rows of data. In other words, fields are the smallest units of information and records are collections of these fields. Fields, like variables, come with certain attributes assigned to them; for example, they may contain only numeric or string values, doubles or longs, and so on. These attributes can be set when inputting the data into the database. RDBMSs have proved extremely popular, and most (if not all) currently operating enterprises that collect and store large quantities of data have some sort of relational database in place. An RDBMS can be easily queried using Structured Query Language (SQL), an accessible and quite natural database management programming language first created at IBM by Donald Chamberlin and Raymond Boyce, and later commercialized and further developed by Oracle. Since the birth of the first RDBMSs, they have evolved into fully supported commercial products, with Oracle, IBM, and Microsoft controlling almost 90% of the total RDBMS market share. In our examples of R's connectivity with RDBMSs (Chapter 5, R with Relational Database Management Systems (RDBMSs)), we will employ a number of the most popular open source relational databases available to users, including MySQL, PostgreSQL, SQLite, and MariaDB.
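
As a small taste of what is to come, the short sketch below shows how a relational table can be created and queried with SQL directly from R, using the DBI interface with an in-memory SQLite database. The sales table and its fields are hypothetical examples; Chapter 5 walks through MySQL, PostgreSQL, SQLite, and MariaDB in detail.

# A minimal sketch of querying a relational database from R through DBI,
# using an in-memory SQLite database and a hypothetical 'sales' table.
library(DBI)
library(RSQLite)

con <- dbConnect(RSQLite::SQLite(), ":memory:")

# Each record (row) consists of fields (columns) with fixed attributes
dbWriteTable(con, "sales", data.frame(
  region = c("North", "South", "North"),
  amount = c(100.50, 250.00, 75.25)
))

# Query the table with SQL and pull the result directly into an R data frame
dbGetQuery(con, "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")

dbDisconnect(con)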

However, this is not where our journey through the exciting landscape of databases and their Big Data applications ends. Although RDBMSs perform very well under heavy transactional loads, and their ability to process quite complex SQL queries is one of their greatest advantages, they are not so good with (near) real-time and streaming data. They also generally do not support unstructured and hierarchical data, and they are not easily scalable horizontally. In response to these needs, a new type of database has recently evolved, or, to be more precise, has been revived from a long state of inactivity: non-relational databases were known and in use, in parallel with RDBMSs, even forty years ago, but they never became as popular. NoSQL and non-relational databases, unlike SQL-based RDBMSs, come with no predefined schema, giving users much-needed flexibility without having to alter the data. They generally scale horizontally very well and are praised for their processing speed, making them ideal storage solutions for (near) real-time analytics in industries such as retail, marketing, and financial services. They also come with their own flavors of SQL-like query languages; some of them, for example the MongoDB query language, are very expressive and allow users to carry out most data transformations and manipulations, as well as complex data aggregation pipelines or even database-specific MapReduce implementations. The rapid growth of interactive web-based services, social media, and streaming data products has resulted in a large array of such purpose-specific NoSQL databases being developed. In Chapter 6, R with Non-Relational (NoSQL) Databases, we will present several examples of Big Data analytics with R using data stored in leading open source non-relational databases: MongoDB, a popular document-based NoSQL database, and HBase, a distributed, Apache Hadoop-compliant database.
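
To give a flavor of such schema-less querying, here is a hedged sketch of talking to MongoDB from R with the mongolite package. It assumes a MongoDB server running locally and a hypothetical sales collection; Chapter 6 covers both MongoDB and HBase step by step.

# A sketch of querying MongoDB from R with the 'mongolite' package.
# Assumes a MongoDB instance on localhost and a hypothetical 'sales' collection.
library(mongolite)

sales <- mongo(collection = "sales", db = "shop",
               url = "mongodb://localhost:27017")

# Documents need no predefined schema; filter them with a JSON query
north <- sales$find('{"region": "North"}')

# An aggregation pipeline: group documents by region and sum their amounts
totals <- sales$aggregate('[
  {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}}
]')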

Hadoop Spark-ed up

In the section Hadoop - the elephant in the room, we introduced you to the essential basics of Hadoop, its Hadoop Distributed File System (HDFS), and the MapReduce paradigm. Despite the huge popularity of Hadoop in both academic and industrial settings, many users complain that Hadoop is generally quite slow and that some computationally demanding data processing operations can take hours to complete. Spark, which makes use of, and is deployed on top of, an existing HDFS, has been designed to excel at iterative calculations: it can run workloads up to 100 times faster than Hadoop's MapReduce when operating in memory, and about 10 times faster when running on disk.

Spark comes with its own small but growing ecosystem of additional tools and applications that support large-scale processing of structured data by embedding SQL queries in Spark programs (through Spark SQL), enable fault-tolerant operations on streaming data (Spark Streaming), allow users to build sophisticated machine learning models (MLlib), and provide out-of-the-box parallel graph algorithms, such as PageRank, label propagation, and many others, on graphs and collections through the GraphX module. Owing to the open source, community-run Apache status of the project, Spark has already attracted enormous interest from people involved in Big Data analysis and machine learning. As of the end of July 2016, there were over 240 third-party packages developed by independent Spark users available at the http://spark-packages.org/ website. A large majority of them allow further integration of Spark with other more or less common Big Data tools on the market. Please feel free to visit the page and check which of the tools or programming languages known to you are already supported by the packages indexed in the directory.
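
As a brief preview of what working with Spark from R can look like, the sketch below uses the sparklyr package (one of several ways of connecting R to Spark) to copy a small built-in data set into a local Spark instance and aggregate it there. It assumes Spark has already been installed locally, for example with sparklyr::spark_install(); the full Spark-with-R workflows are covered in Chapters 7 and 8.

# A sketch of running an aggregation on a local Spark instance from R.
# Assumes a local Spark installation (e.g. via sparklyr::spark_install()).
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Copy a small built-in data set into Spark and aggregate it on the cluster
iris_tbl <- copy_to(sc, iris, overwrite = TRUE)

iris_tbl %>%
  group_by(Species) %>%
  summarise(mean_petal_length = mean(Petal_Length, na.rm = TRUE)) %>%
  collect()

spark_disconnect(sc)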

In Chapter 7, Faster than Hadoop: Spark and R, and Chapter 8, Machine Learning for Big Data in R, we will discuss the methods of utilizing Apache Spark in our Big Data analytics workflows using the R programming language. However, before we do so, we need to familiarize ourselves with the most integral part of this book: the R language itself.