
BurpCSJ - Dealing with authentication

I have received many questions on how to handle authentication when using BurpCSJ, so here is a short tutorial on how to manage it properly. If you are looking for how to use this Burp extension in general, a basic tutorial is available as well.

In this post, we are going to use BurpCSJ against the Altoro bank (a deliberately vulnerable web application), which is available online here: http://demo.testfire.net/

First, start clean (the reasons will be clear at the end of this tutorial):

- Start Burp;
- Start the browser and configure its proxy settings to work with Burp;
- Browse to the target site: http://demo.testfire.net/
- Perform the login: user: jsmith - password: Demo1234
- Check the Burp cookie jar (under Options / Sessions); it should now be populated with some cookies:


- Configure BurpCSJ (Crawljax tab): make sure that "Use Manual Proxy" is ticked and pointing to Burp, and that the "Use cookie jar" option is ticked as well:



- Launch BurpCSJ against the target site (right-click, "Send URL to crawljax" option). When BurpCSJ launches Crawljax, you will notice that the first request carries no Cookie header. This is normal with WebDriver: it needs to initialize first, so no worries.



- The second or third request (depending on whether there is a redirection) and all subsequent requests performed by Crawljax will include the valid cookies from the cookie jar.

You are now performing an authenticated crawling session; if you check the browser managed by WebDriver, you should see that it is using a valid authenticated session.
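
To make the mechanics of the steps above clearer, here is a minimal, standalone sketch of the wiring BurpCSJ relies on: a WebDriver-controlled browser routed through Burp's proxy listener, with the session cookie injected only after the first navigation (which is why that first request goes out without a Cookie header). This is not BurpCSJ's actual code; it assumes the Selenium 4.x Java bindings, Firefox/geckodriver, Burp listening on 127.0.0.1:8080, and an illustrative cookie name and post-login path.

    import org.openqa.selenium.Cookie;
    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.firefox.FirefoxOptions;

    public class ProxiedCrawlSketch {
        public static void main(String[] args) {
            // Route all browser traffic through Burp (default listener 127.0.0.1:8080).
            Proxy burp = new Proxy();
            burp.setHttpProxy("127.0.0.1:8080");
            burp.setSslProxy("127.0.0.1:8080");

            FirefoxOptions options = new FirefoxOptions();
            options.setProxy(burp);

            // A fresh WebDriver profile starts with an empty cookie store,
            // hence the very first request carries no Cookie header.
            WebDriver driver = new FirefoxDriver(options);
            driver.get("http://demo.testfire.net/");

            // Cookies can only be injected once the browser is on the target domain;
            // from this point on, every request carries the session cookie.
            // Cookie name and value are illustrative placeholders.
            driver.manage().addCookie(new Cookie("ASP.NET_SessionId", "value-from-burp-cookie-jar"));
            driver.get("http://demo.testfire.net/bank/main.aspx"); // illustrative authenticated page

            driver.quit();
        }
    }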

If you do not follow the first two steps, you might run into issues and fail to get a proper authenticated crawling session. This has happened to me quite a few times...

Let's say that you have already started the browser and logged in, and only then do you enable the proxy with Burp and run BurpCSJ. The issue is that Burp has no history of the Set-Cookie directive, so it identifies the cookies sent by the browser and populates the cookie jar using only the parent domain as a reference.

Below, you can see the issue by comparing the cookies in the browser with the ones in the Burp cookie jar. Can you spot the difference? ;-)

If this happens, a BurpCSJ crawl against demo.testfire.net will not use the cookies in the Burp cookie jar, as demo.testfire.net does not match testfire.net. So no authenticated crawling session in this case...
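
To picture the mismatch, here is a tiny illustration (not BurpCSJ's or Burp's actual code) of the host comparison described above, assuming the jar entry was recorded against the parent domain only:

    public class CookieScopeCheck {
        public static void main(String[] args) {
            // The jar entry was recorded against the parent domain (Burp never saw a
            // Set-Cookie), while the crawl is launched against the full host name.
            String jarDomain  = "testfire.net";
            String targetHost = "demo.testfire.net";

            // An exact host comparison fails, so the cookie is never attached.
            System.out.println(targetHost.equals(jarDomain)); // prints: false
        }
    }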

So don't be lazy: restart or clean the browser from time to time if you have to... ;-)

The latest Crawljax package has fixed multiple issues. I have noticed that the crawler is more diligent and sticks to the target domain instead of visiting pages on out-of-scope domains.

As usual, feedback is more than welcome; feel free to contact me or raise GitHub issues - https://github.com/malerisch/burpcsj

Comments

  1. How would one use BurpCSJ to handle an application which prompts the user for an ID and password but does not create a session ID cookie? Instead, it sets an internal JavaScript variable and passes that with every request - as a hidden POST variable in most cases.

  2. Run BurpCSJ through the proxy, and then in the Proxy options you need a Match/Replace entry to have Burp automatically add the hidden POST variable to each HTTP request. See here: http://portswigger.net/Burp/help/proxy_options.html . Hope it helps.
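
     For example, a rule of this shape could work, assuming the hidden parameter is called authToken (a purely hypothetical name) and shows up in the crawler's requests with a stale value that needs to be swapped for a known-valid one:

         Type:    Request body
         Match:   authToken=[^&]*
         Replace: authToken=<known valid value>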
