Instant jsoup How-to

By: Pete Houston

Overview of this book

As you might know, there are a lot of Java libraries out there that support parsing HTML content. Jsoup is yet another HTML parsing library, but it provides a lot of functionality and boasts more interesting features than many of the others. Give it a try, and you will see the difference! Instant jsoup How-to provides simple and detailed instructions on how to use the Jsoup library to manipulate HTML content to suit your needs. You will learn the basic aspects of data crawling, as well as the various concepts of Jsoup, so you can make the best use of the library to achieve your goals. Instant jsoup How-to helps you learn step by step using real-world, practical problems. You will begin with several basic topics, such as getting input from a URL, a file, or a string, and making use of DOM navigation to search for data. You will then move on to advanced topics, such as using the CSS selector and cleaning dirty HTML data. HTML data is not always safe, so you will also learn how to sanitize dirty documents to prevent XSS attacks. Instant jsoup How-to is a book for every Java developer who wants to learn HTML manipulation quickly and effectively. It includes sample source code for you to refer to, with a detailed explanation of every feature of the library.

Listing all URLs within an HTML page (Should know)


We are one step closer to data crawling techniques, and this recipe will give you an idea of how to parse all the URLs within an HTML document.

How to do it...

In this task, we are going to parse all the links in http://jsoup.org.

  1. Load the Document class structure from the page.

    Document doc = Jsoup.connect(URL_SOURCE).get();
  2. Select all the URLs in the page.

    Elements links = doc.select("a[href]");
  3. Output the results.

    for (Element url : links) {
        System.out.println(String.format("* [%s] : %s", url.text(), url.attr("abs:href")));
    }

The complete example source code for this section is available at \source\Section06.
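If you want to put the three steps together, the following is a minimal, self-contained sketch (the class name ListLinksExample and the URL_SOURCE constant are illustrative, not taken from the book's sample):

    import java.io.IOException;

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;
    import org.jsoup.select.Elements;

    public class ListLinksExample {

        // Illustrative constant; the recipe parses http://jsoup.org
        private static final String URL_SOURCE = "http://jsoup.org";

        public static void main(String[] args) throws IOException {
            // 1. Load the Document structure from the page
            Document doc = Jsoup.connect(URL_SOURCE).get();

            // 2. Select all <a> tags that carry an href attribute
            Elements links = doc.select("a[href]");

            // 3. Print each link's text and its resolved absolute URL
            for (Element url : links) {
                System.out.println(String.format("* [%s] : %s",
                        url.text(), url.attr("abs:href")));
            }
        }
    }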

How it works...

Up to this point, you should already be familiar with the CSS selector syntax and know how to extract content from a tag/node.

The sample code will select all <a> tags with an href attribute and print the output:

System.out.println(String.format("* [%s] : %s", url.text(), url.attr("abs:href")));

If you simply print the attribute value with url.attr("href"), the output will appear exactly as it does in the HTML source, which means some links will be relative rather than absolute. The abs: prefix in abs:href tells Jsoup to resolve the attribute into an absolute URL.
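To see the difference for yourself, a small sketch such as the following (the class name AbsHrefDemo is illustrative) prints both attribute values side by side:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class AbsHrefDemo {
        public static void main(String[] args) throws Exception {
            Document doc = Jsoup.connect("http://jsoup.org").get();
            for (Element link : doc.select("a[href]")) {
                // "href" prints exactly as written in the source (possibly relative),
                // while "abs:href" resolves it against the document's base URI
                System.out.println(link.attr("href") + "  ->  " + link.attr("abs:href"));
            }
        }
    }

Relative entries will then appear next to their resolved, absolute form.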

There's more...

In HTML, the <a> tag is not the only one that contains a URL; there are other tags as well, such as <img>, <script>, and <iframe>. So how are we going to get their links?

If you pay attention to these tags, you can see that they share a common attribute, src. So the task is quite simple: retrieve all tags that contain the src attribute:

  Elements results = doc.select("[src]");
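Put together, a rough, self-contained sketch (the class name ListSrcExample is illustrative) could look like this:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;
    import org.jsoup.select.Elements;

    public class ListSrcExample {
        public static void main(String[] args) throws Exception {
            Document doc = Jsoup.connect("http://jsoup.org").get();

            // Every tag that declares a src attribute: <img>, <script>, <iframe>, and so on
            Elements media = doc.select("[src]");
            for (Element src : media) {
                // abs:src resolves relative paths to absolute URLs, just like abs:href
                System.out.println(String.format("* <%s> : %s",
                        src.tagName(), src.attr("abs:src")));
            }
        }
    }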

Note

The following is a very good link-listing example from the Jsoup author:

http://jsoup.org/cookbook/extracting-data/example-list-links