R Web Scraping Quick Start Guide

By : Olgun Aydin
Overview of this book

Web scraping is a technique for extracting data from websites. It simulates the behavior of a website user to turn the website itself into a web service from which data can be retrieved. This book gives you all you need to get started with scraping web pages using R. You will learn the rules of regex and XPath, key components for scraping website data, and we will show you web scraping techniques, methodologies, and frameworks. With this book's guidance, you will become comfortable with the tools used to write and test regex and XPath rules. We will focus on examples of scraping data from dynamic websites and how to apply the techniques you have learned. You will learn how to collect URLs and then create XPath rules for your first web scraping script using the rvest library. From the data you collect, you will be able to calculate statistics and create R plots to visualize them. Finally, you will discover how to use Selenium drivers with R for more sophisticated scraping. You will create AWS instances and use R to connect to a PostgreSQL database hosted on AWS. By the end of the book, you will be sufficiently confident to create end-to-end web scraping systems using R.
Table of Contents (7 chapters)

Summary

In this chapter, we learned how to write a scraping script using the RSelenium library. First, we looked at how to use Selenium drivers in R, and then at how to use XPath rules with the RSelenium package. Afterward, we wrote our first web scraping script with RSelenium to collect the users who commented on a specific post, as well as the users mentioned in those comments. We then sent a click event to the page. Finally, we used regex rules to extract the information we were interested in.
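The steps summarized above can be sketched in a few lines of R. This is a minimal, hedged example, not the chapter's actual script: the URL, the XPath rules, and the mention pattern are placeholders, and it assumes a Selenium server is already running locally on port 4445.

```r
# Sketch of the RSelenium workflow: connect, navigate, find elements
# by XPath, send a click event, then extract mentions with a regex.
library(RSelenium)

# Connect to a running Selenium server (assumed to listen on port 4445)
remDr <- remoteDriver(remoteServerAddr = "localhost", port = 4445L,
                      browserName = "firefox")
remDr$open()
remDr$navigate("https://example.com/post/1")  # placeholder URL

# Collect comment elements with a hypothetical XPath rule
comments <- remDr$findElements(using = "xpath", "//div[@class='comment']")
comment_text <- sapply(comments, function(el) el$getElementText()[[1]])

# Send a click event, for example to load more comments (placeholder XPath)
more_btn <- remDr$findElement(using = "xpath", "//button[@id='load-more']")
more_btn$clickElement()

# Extract mentioned usernames with a regex (here, handles like @username)
mentions <- unlist(regmatches(comment_text,
                              gregexpr("@[A-Za-z0-9_]+", comment_text)))

remDr$close()
```

Before running a script like this, a Selenium server must be started separately, for example via Docker (`docker run -d -p 4445:4444 selenium/standalone-firefox`).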

In the next chapter, we will cover the fundamentals of cron jobs and databases. By the end of that chapter, we will have created a cron job to schedule our web scraping task and learned how to store the collected data in a database.