Mastering Parallel Programming with R

By: Simon R. Chapple, Sloan, Forster, Troup
Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of R's built-in parallel package versions of lapply(), to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming using Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit GPUs comprising thousands of simple processors through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; pitfalls to avoid, including deadlock and numerical instability issues; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.
Grid parallelism

Grid parallelism is naturally aligned to image processing, where operations can be cast in a form that acts on a specific localized region for each and every individual cell value of data. Commonly, the cell value is referred to as a pixel in the case of 2D image data, and voxel in the case of 3D image data. Grids can, of course, be N-dimensional matrix structures, but as human beings, it's somewhat difficult for us to wrap our heads around more than 4D.
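
As a small illustration (a sketch of our own, not taken from the book), the following R function expresses one such localized operation: each output pixel is computed from a fixed neighbourhood of input pixels around the same position, here a 3x3 mean filter over a 2D image held as a matrix. The function name is purely illustrative.

# A localized grid operation: every output cell depends only on the 3x3
# neighbourhood of input cells centred on the same position.
# Border cells are left unchanged for simplicity.
mean_filter_3x3 <- function(img) {
  nr <- nrow(img)
  nc <- ncol(img)
  out <- img
  for (i in 2:(nr - 1)) {
    for (j in 2:(nc - 1)) {
      out[i, j] <- mean(img[(i - 1):(i + 1), (j - 1):(j + 1)])
    }
  }
  out
}

# Example usage on a small random "image"
img <- matrix(runif(100), nrow = 10)
smoothed <- mean_filter_3x3(img)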

The key to efficient grid parallelism is the distribution mapping of data across the set of parallel processes, and the interactions between those processes, as they may exchange data with one another to accommodate iterative operations that require access to more of the data than each process holds locally. Consider a simple but very large square 2D image, and suppose that we have a cluster of nine independent computational cores available. To illustrate the point, we will add the constraint that each of the computational nodes...
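
To make the distribution mapping concrete, here is a minimal sketch (our own assumptions, not the book's code) that decomposes a square image matrix into a 3x3 grid of blocks and hands each block to one of nine worker processes using R's built-in parallel package. It assumes nine cores are actually available, and it shows only an independent per-block operation; the boundary (halo) exchange between neighbouring blocks described above is deliberately omitted.

# Decompose an n x n image into a 3x3 grid of blocks and process each
# block on one of nine workers with the built-in parallel package.
library(parallel)

n <- 900                                    # image is n x n, divisible by 3
img <- matrix(runif(n * n), nrow = n)

block_size <- n / 3
# Build a list of nine blocks, one per (row, column) position in the grid
blocks <- list()
for (br in 0:2) {
  for (bc in 0:2) {
    rows <- (br * block_size + 1):((br + 1) * block_size)
    cols <- (bc * block_size + 1):((bc + 1) * block_size)
    blocks[[length(blocks) + 1]] <- img[rows, cols]
  }
}

# Apply an independent per-block operation (here, a simple rescale) in parallel
cl <- makeCluster(9)
results <- parLapply(cl, blocks, function(b) (b - min(b)) / (max(b) - min(b)))
stopCluster(cl)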
