Go Web Scraping Quick Start Guide

By: Vincent Smith

Overview of this book

Web scraping is the process of extracting information from the web using various tools that perform scraping and crawling. Go is emerging as a language of choice for scraping, thanks to a variety of libraries. This book quickly shows you how to scrape data from various websites using Go libraries such as Colly and Goquery. It starts with an introduction to the use cases for building a web scraper and the main features of the Go programming language, along with setting up a Go environment. It then moves on to HTTP requests and responses and how Go handles them. You will also learn the basics of web scraping etiquette. You will be taught how to navigate through a website using breadth-first and depth-first searches, as well as how to find and follow links. You will learn ways to track history in order to avoid loops and to protect your web scraper using proxies. Finally, the book covers the Go concurrency model, how to run scrapers in parallel, and large-scale distributed web scraping.

How to throttle your scraper

Part of good web scraping etiquette is making sure you are not putting too much load on your target web server. This means limiting the number of requests you make within a given period of time. This is especially true for smaller servers, which have a much more limited pool of resources. As a good rule of thumb, you should only access the same web page as often as you think it will change. For example, if you were looking at daily deals, you would probably only need to scrape once per day. As for scraping multiple pages from the same website, you should first honor the Crawl-Delay directive in the site's robots.txt file. If no Crawl-Delay is specified, then you should manually delay your requests by at least one second between pages.
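
As a minimal sketch of that rule, the following Go program pauses between requests with time.Sleep. The example.com URLs are hypothetical, and the crawlDelay value is an assumption: it defaults to one second, standing in for whatever Crawl-Delay the site's robots.txt actually specifies.

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical list of pages, all on the same host.
	urls := []string{
		"https://www.example.com/deals/1",
		"https://www.example.com/deals/2",
		"https://www.example.com/deals/3",
	}

	// Default to one second between requests; if the site's robots.txt
	// specifies a Crawl-Delay, set this value to match it instead.
	crawlDelay := 1 * time.Second

	for _, url := range urls {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
		} else {
			fmt.Println(url, "->", resp.Status)
			resp.Body.Close()
		}
		// Pause before the next request so the server is never hit
		// more often than once per crawlDelay.
		time.Sleep(crawlDelay)
	}
}
```

Because the sleep happens after each response, the server always gets at least crawlDelay of idle time between requests, which errs on the conservative side.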

There are many different ways to incorporate delays into your crawler, from manually putting your program to sleep to using external...
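As one sketch of the sleep-based approach refined slightly, the loop below uses a time.Ticker from the standard library instead of a plain sleep. This is an illustrative variant under the same assumptions as above (hypothetical URLs, a one-second delay), not the book's prescribed method; the difference is that the ticker keeps a steady pace even when requests themselves take a noticeable amount of time.

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical list of pages, all on the same host.
	urls := []string{
		"https://www.example.com/deals/1",
		"https://www.example.com/deals/2",
		"https://www.example.com/deals/3",
	}

	// The ticker fires once per second; receiving from ticker.C blocks
	// until the next tick, throttling the loop regardless of how long
	// each individual request takes.
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()

	for _, url := range urls {
		<-ticker.C
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		fmt.Println(url, "->", resp.Status)
		resp.Body.Close()
	}
}
```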