
BurpCSJ Tutorial - Using Crawljax

This is a simple tutorial to get you started with BurpCSJ and Crawljax.

Installation is easy - just download BurpCSJ and import it into Burp via the Extender tab, as shown below:

Extender -> Add -> Choose File



Once the extension is loaded, two new tabs will appear on the right side:



Start crawling

To start crawling, grab a URL item from any Burp tab (e.g. proxy history), right-click on the item and choose "Send URL to Crawljax", as shown below:


After this, a Crawljax session will start based on the settings configured via the Crawljax tab.
It is always recommended to choose a web root URL item for Crawljax, e.g. http://yoursite.xxx/, rather than a specific page or folder. This is typically the URL that you have configured under Target/Scope in Burp.
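
Under the hood, BurpCSJ drives the Crawljax Java API. Below is a minimal sketch of what an equivalent standalone configuration looks like - the URL is a placeholder and the proxy settings assume Burp's default listener on 127.0.0.1:8080; this is illustrative, not BurpCSJ's actual code:

    import com.crawljax.core.CrawljaxRunner;
    import com.crawljax.core.configuration.CrawljaxConfiguration;
    import com.crawljax.core.configuration.CrawljaxConfiguration.CrawljaxConfigurationBuilder;
    import com.crawljax.core.configuration.ProxyConfiguration;

    public class BasicCrawl {
        public static void main(String[] args) throws Exception {
            // Start from the web root, as recommended above (placeholder URL)
            CrawljaxConfigurationBuilder builder =
                    CrawljaxConfiguration.builderFor("http://yoursite.xxx/");

            // Route browser traffic through Burp so it shows up in the proxy history
            builder.setProxyConfig(ProxyConfiguration.manualProxyOn("127.0.0.1", 8080));

            // Click the default set of elements (anchors etc.)
            builder.crawlRules().clickDefaultElements();

            new CrawljaxRunner(builder.build()).call();
        }
    }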

Crawling with a different browser

Under the Crawljax tab, it is possible to configure the path to the browser drivers, proxy settings and other options for Crawljax.


If you need to use a different browser with Crawljax, you will need to add the relevant driver or executable.
In this example, let's use the Chrome driver:



Once Chrome is selected, you can start Crawljax with Chrome as described in the previous step.
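
For reference, selecting a different browser corresponds roughly to the following Crawljax configuration, continuing the builder from the sketch above - the driver path is a placeholder and must point to a ChromeDriver matching your installed Chrome:

    import com.crawljax.browser.EmbeddedBrowser.BrowserType;
    import com.crawljax.core.configuration.BrowserConfiguration;

    // Tell Selenium where the ChromeDriver executable lives (placeholder path)
    System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");

    // Drive a single Chrome instance instead of the default browser
    builder.setBrowserConfig(new BrowserConfiguration(BrowserType.CHROME, 1));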

Crawling an application with login/authentication

If you are testing a web application with login/authentication, it is recommended to use the Burp cookie jar. This option allows BurpCSJ to pass cookies to Crawljax when crawling a site. If you already have session tokens in the cookie jar, BurpCSJ will use those.
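
Conceptually this is similar to seeding each new Crawljax browser with a session cookie via a plugin, as in the sketch below - the cookie name/value and URL are placeholders, and it assumes the WebDriver-backed browser implementation; it is not BurpCSJ's actual code:

    import org.openqa.selenium.Cookie;
    import org.openqa.selenium.WebDriver;

    import com.crawljax.browser.EmbeddedBrowser;
    import com.crawljax.browser.WebDriverBackedEmbeddedBrowser;
    import com.crawljax.core.plugin.OnBrowserCreatedPlugin;

    builder.addPlugin(new OnBrowserCreatedPlugin() {
        @Override
        public void onBrowserCreated(EmbeddedBrowser newBrowser) {
            WebDriver driver = ((WebDriverBackedEmbeddedBrowser) newBrowser).getBrowser();
            // The browser must be on the target domain before a cookie can be set
            driver.get("http://yoursite.xxx/");
            driver.manage().addCookie(new Cookie("JSESSIONID", "placeholder-value"));
        }
    });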



Exclusion list

The exclusion list allows you to filter out unwanted pages, such as logout or sign-off links. More entries may be needed for complex applications, such as administrative interfaces, where crawling might actually change or modify the application state.
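
In Crawljax API terms, an exclusion entry maps to a "don't click" crawl rule, for example (the patterns below are illustrative):

    // Skip anything that would end the session or mutate application state
    builder.crawlRules().dontClick("a").withText("logout");
    builder.crawlRules().dontClick("a").withText("sign off");
    builder.crawlRules().dontClick("a").underXPath("//div[@id='admin-menu']//a");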



Setting crawling rules for HTML elements

The last part allows more granular control over the HTML elements that Crawljax will consider. By enabling more HTML elements, Crawljax logic is applied against more of the page; as a consequence, the Crawljax session will probably take longer to complete.
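
Enabling extra elements in the tab corresponds to widening the click rules along these lines (the tag choices are examples):

    // Beyond the defaults, also exercise these elements; each added tag
    // enlarges the state space and therefore the crawl duration
    builder.crawlRules().click("button");
    builder.crawlRules().click("div");
    builder.crawlRules().click("span");
    builder.crawlRules().click("input").withAttribute("type", "submit");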



Generating a report of the crawling session

The CrawlOverview plugin can be invoked, and an output folder needs to be set. At the end of the Crawljax session, the report will be generated under that folder.
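
In code, enabling the plugin looks roughly like this - the output folder name is a placeholder, and depending on the Crawljax version the folder is set on the builder or passed to the plugin:

    import java.io.File;

    import com.crawljax.plugins.crawloverview.CrawlOverview;

    // The HTML report is written here when the crawl session finishes
    builder.setOutputDirectory(new File("crawljax-report"));
    builder.addPlugin(new CrawlOverview());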

An example of CrawlOverview output can be seen here: http://crawls.crawljax.com/
