Learning Python Web Penetration Testing

By: Christian Martorella

Overview of this book

Web penetration testing is the use of tools and code to attack a website or web application in order to assess its vulnerability to external threats. While there is an increasing number of sophisticated, ready-made tools for scanning systems for vulnerabilities, Python lets you write system-specific scripts, or alter and extend existing testing tools, to find, exploit, and record as many security weaknesses as possible. Learning Python Web Penetration Testing walks you through the web application penetration testing methodology, showing you how to write your own Python tools for each activity in the process. The book begins by emphasizing why knowing how to write your own tools matters in web application penetration testing. You will then learn to interact with a web application using Python; understand the anatomy of an HTTP request, its URL, headers, and message body; and create a script that performs a request and interprets the response and its headers. As you make your way through the book, you will write a web crawler using Python and the Scrapy library. The book will also help you develop a tool to perform brute-force attacks against different parts of a web application, and you will learn to detect and exploit SQL injection vulnerabilities. By the end of this book, you will have created an HTTP proxy based on the mitmproxy tool.

What is resource discovery?

In this section, we're going to learn what resource discovery is and why it is important when testing web applications. We're also going to introduce FUZZDB, which we will use in the next section as our dictionary database.
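
To make the idea concrete before we build the full tool, here is a minimal sketch of dictionary-based resource discovery: it reads candidate paths from a wordlist (such as one of the FUZZDB discovery lists) and reports every path on the target that does not answer with a 404. The target URL, the wordlist filename, and the use of the third-party requests library are illustrative assumptions here, not the book's exact code.

import requests  # third-party; assumed installed (pip install requests)

TARGET = "http://localhost:8000"   # placeholder base URL of the app under test
WORDLIST = "wordlist.txt"          # placeholder; e.g. a FUZZDB discovery list

def discover(target, wordlist):
    """Request each candidate path; print those the server does not 404."""
    with open(wordlist) as f:
        for line in f:
            path = line.strip()
            if not path or path.startswith("#"):
                continue  # skip blank lines and comments
            url = "{}/{}".format(target.rstrip("/"), path.lstrip("/"))
            try:
                response = requests.get(url, timeout=5)
            except requests.RequestException:
                continue  # connection error or timeout; try the next path
            if response.status_code != 404:
                print(response.status_code, url)

if __name__ == "__main__":
    discover(TARGET, WORDLIST)

A real discovery tool would add concurrency, checks against false positives (servers that answer 200 for every path), and recursion into discovered directories; the next section works toward that, using FUZZDB as the dictionary.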

You will remember that, in Chapter 1, Introduction to Web Application Penetration Testing, we learned about the penetration testing process. The second phase in the process is mapping, in which we build a map or catalog of the application's pages and functionality. In earlier sections, we learned how to perform application mapping using a crawler, and we saw that crawlers have limitations; for example, links generated by JavaScript are not identified by crawlers. This can be overcome by using an HTTP proxy or a headless browser such as PhantomJS. If we do that, we should be able...