Another problem to be studied throughout this book is the implementation of a parallel Web crawler. A Web crawler is a program that browses the Web to collect information from pages. In the scenario analyzed here, a sequential Web crawler is fed a variable number of Uniform Resource Locators (URLs) and must find all the links within each page provided. Since the number of input URLs may be relatively large, we can look for parallelism by planning a solution as follows:
1. Group all the input URLs in a data structure.
2. Associate each URL with a task that performs the crawling, obtaining information from that URL.
3. Dispatch the tasks for execution to parallel workers.
4. Pass the results of the previous stage to a final stage, which refines the raw collected data, saving the links and relating them to their original URLs.
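The steps above can be sketched in Python with the standard library's `concurrent.futures` pool. This is a minimal illustration, not a production crawler: the `fetch` function is a stub that returns canned HTML so the example runs offline, and the URLs are hypothetical; a real crawler would perform an HTTP GET (for example with `urllib.request`) at that point.

```python
from concurrent.futures import ThreadPoolExecutor
from html.parser import HTMLParser


class LinkParser(HTMLParser):
    """Collects the href attribute of every <a> tag in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def fetch(url):
    # Stub fetch: a real crawler would issue an HTTP GET here.
    # These pages and URLs are made up for the sake of the sketch.
    pages = {
        "http://example.com/a": '<a href="http://example.com/b">b</a>',
        "http://example.com/b": '<a href="http://example.com/a">a</a>',
    }
    return pages.get(url, "")


def crawl(url):
    """The per-URL task (step 2): fetch the page and extract its links."""
    parser = LinkParser()
    parser.feed(fetch(url))
    return url, parser.links


if __name__ == "__main__":
    # Step 1: group the input URLs in a data structure.
    urls = ["http://example.com/a", "http://example.com/b"]

    # Step 3: dispatch the tasks to parallel workers.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = pool.map(crawl, urls)

    # Step 4: relate the collected links back to their original URLs.
    links_by_url = dict(results)
    for url, links in links_by_url.items():
        print(url, "->", links)
```

Because each URL is crawled independently, the tasks have no shared state and map naturally onto a pool of workers; only the final aggregation stage brings the results back together.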
As we can observe in the numbered steps for a proposed solution, there is a combination...