R Web Scraping Quick Start Guide

By: Olgun Aydin

Overview of this book

Web scraping is a technique for extracting data from websites. It simulates the behavior of a website user to turn the website itself into a web service and retrieve or introduce new data. This book gives you all you need to get started with scraping web pages using R. You will learn the rules of RegEx and XPath, key components for scraping website data, and we will show you web scraping techniques, methodologies, and frameworks. With this book's guidance, you will become comfortable with the tools needed to write and test RegEx and XPath rules. We will focus on examples of scraping data from dynamic websites and how to implement the techniques learned. You will learn how to collect URLs and then create XPath rules for your first web scraping script using the rvest library. From the data you collect, you will be able to calculate statistics and create R plots to visualize them. Finally, you will discover how to use Selenium drivers with R for more sophisticated scraping. You will create AWS instances and use R to connect to a PostgreSQL database hosted on AWS. By the end of the book, you will be confident enough to create end-to-end web scraping systems using R.

Data extraction systems

A web data extraction system can be defined as a platform that implements a set of procedures for taking information from web sources. In most cases, the typical end users of web data extraction systems are companies or data analysts looking for web-related information.

An intermediate user category consists of non-specialized individuals who need to collect some web content, often on an irregular basis. Users in this category are typically inexperienced and look for simple yet powerful web data extraction software packages. DEiXTo is one such package: it is based on the W3C Document Object Model (DOM) and allows users to easily create extraction rules that point to the portion of data to be extracted from a website.
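
To make the idea of a DOM-based extraction rule concrete, the following is a minimal sketch in R using the rvest library. The URL and the XPath expression are placeholders for illustration only; replace them with the page and the nodes you actually want to target:

library(rvest)

# Read the page into a DOM tree
page <- read_html("https://example.com/products")

# The XPath rule points at one portion of the document tree,
# much like a DEiXTo extraction rule does
titles <- page %>%
  html_nodes(xpath = "//div[@class='product']/h2") %>%
  html_text(trim = TRUE)

print(titles)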

In practice, web data extraction covers a wide range of programming techniques and technologies, such as web scraping, data analysis, natural language parsing, and information security. Web browsers are useful for executing JavaScript, displaying images, and arranging objects in a human-readable format, but web scrapers excel at collecting and processing large amounts of data quickly. Rather than viewing one page at a time, they can work through thousands, or even millions, of pages at a time (Mitchell 2015).
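
As a rough illustration of this scale advantage, the following hedged sketch loops over a set of listing pages and collects a text field from each one. The URL pattern and the CSS selector are assumptions made for the example:

library(rvest)

# Hypothetical paginated listing: pages 1 to 50
urls <- sprintf("https://example.com/listings?page=%d", 1:50)

scrape_page <- function(url) {
  page <- read_html(url)
  page %>%
    html_nodes(".listing-title") %>%
    html_text(trim = TRUE)
}

# Collect the results page by page; for thousands of pages you would also add
# error handling and polite delays (for example, Sys.sleep()) between requests
all_titles <- unlist(lapply(urls, scrape_page))
length(all_titles)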

In addition, web scrapers can go places that traditional search engines cannot reach. A Google search for cheap flights to Turkey returns a large number of results, including advertisements and links to popular flight search sites. Google only knows what these websites say on their content pages, not the exact results of the various queries entered into a flight search application. However, a well-developed web scraper can track how the price of a flight to Turkey varies over time across various websites and can tell you the best time to purchase your ticket.
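
A sketch of how such a price tracker might record observations over time is shown below. The results page URL and the XPath rule are hypothetical, and a real flight search site would usually render prices with JavaScript, which is why later chapters turn to Selenium:

library(rvest)

# Hypothetical search results page for flights to Turkey
url  <- "https://example-flights.com/search?to=IST"
page <- read_html(url)

price <- page %>%
  html_node(xpath = "//span[@class='price']") %>%
  html_text(trim = TRUE)

# Append each observation with a timestamp so price changes can be compared later
record <- data.frame(time = Sys.time(), price = price)
write.table(record, "flight_prices.csv", append = TRUE,
            sep = ",", row.names = FALSE, col.names = FALSE)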