In this chapter, we walked through a variety of ways to scrape data from a web page. Regular expressions can be useful for a one-off scrape, or to avoid the overhead of parsing the entire web page. BeautifulSoup provides a high-level interface while avoiding any difficult dependencies. However, in general, lxml will be the best choice because of its speed and extensive functionality, so we will use it in future examples.
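To make the comparison concrete, here is a minimal sketch showing all three approaches extracting the same value from a small HTML fragment. The `PAGE` string and the `w2p_fw` class name are illustrative assumptions, not tied to any particular site, and the lxml example uses XPath to avoid the extra `cssselect` dependency:

```python
import re
from bs4 import BeautifulSoup
from lxml import html

# Hypothetical HTML fragment standing in for a downloaded page
PAGE = '<table><tr><td class="w2p_fw">9,596,960</td></tr></table>'

# 1. Regular expression: quick for a one-off scrape, but brittle
#    if the surrounding markup changes
area_re = re.search(r'<td class="w2p_fw">(.*?)</td>', PAGE).group(1)

# 2. BeautifulSoup: readable, high-level interface
soup = BeautifulSoup(PAGE, 'html.parser')
area_bs = soup.find('td', attrs={'class': 'w2p_fw'}).text

# 3. lxml: typically the fastest, with XPath support
tree = html.fromstring(PAGE)
area_lxml = tree.xpath('//td[@class="w2p_fw"]')[0].text_content()
```

All three return the same string here; the trade-off is robustness and speed rather than capability for a simple extraction like this one.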
In the next chapter, we will introduce caching, which allows us to save web pages so that they need to be downloaded only the first time a crawler is run.