
Big Data Analytics with R

By: Simon Walkowiak

Overview of this book

Big Data analytics is the process of examining large and complex data sets that often exceed the computational capabilities of a single machine. R is a leading programming language of data science, offering powerful functions for tackling problems related to Big Data processing. The book begins with a brief introduction to the Big Data world and its current industry standards, followed by an introduction to the R language, covering its development, structure, real-world applications, and shortcomings. It then progresses to a revision of the major R functions for data management and transformation. Readers are introduced to cloud-based Big Data solutions (e.g. Amazon EC2 instances, Amazon RDS, and Microsoft Azure with its HDInsight clusters) and given guidance on connecting R to relational and non-relational databases such as MongoDB and HBase. The book further expands to cover Big Data tools such as the Apache Hadoop ecosystem, with its HDFS and MapReduce frameworks, as well as other R-compatible tools such as Apache Spark, its machine learning library Spark MLlib, and H2O.
Table of Contents (16 chapters)

To the memory limits and beyond

We will start off by introducing you to three very useful and versatile packages which facilitate out-of-memory data processing: ff, ffbase, and ffbase2.

Data transformations and aggregations with the ff and ffbase packages

Although the ff package, authored by Adler, Glaser, Nenadic, Oehlschlägel, and Zucchini, is several years old, it still proves to be a popular solution for large data processing with R. The package's title, Memory-efficient storage of large data on disk and fast access functions, roughly explains what it does. It chunks the dataset and stores it on a hard drive, while the ff data structure (or the ffdf data frame), which is held in RAM like other R data structures, provides a mapping to the partitioned dataset. The chunks of raw data are simply binary flat files in native encoding, whereas the ff objects keep the metadata that describe and link to the created binary files. Creating ff structures and binary files from the raw data does not...
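The mapping described above can be seen in a minimal sketch, assuming the ff package is installed; the data frame here is synthetic and its column names are illustrative only:

```r
library(ff)

# An ordinary in-memory data frame:
df <- data.frame(id = 1:100000, value = rnorm(100000))

# Convert it to a disk-backed ffdf; the object held in RAM is only
# metadata, while the column data are written as binary flat files:
fdf <- as.ffdf(df)

# Each column of the ffdf is an ff vector linked to a file on disk:
filename(fdf$id)   # path to the binary flat file backing the 'id' column

# The ffdf behaves much like a regular data frame for inspection:
dim(fdf)
```

Subsetting and many base operations work on the ffdf as they would on a data frame, but only the requested chunks are read from disk into RAM.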