Wednesday, 18 December 2013

Crashing Firefox with a Regular Expression

Recently, I found an interesting crash in Firefox and decided to investigate further. A quick Google search showed that the issue is already known and was reported to Mozilla a few months ago.
However, the bug is not fixed yet (at least in FF 26) and, as a personal exercise, I decided to dig a little deeper and collect some notes, which I am sharing in this blog post.
Here is a brief analysis of what I have found, thanks also to the pointers given by my friend Andrzej Dereszowski.

This is the crash PoC:


function main() {
    regexp = /(?!Z)r{2147483647,}M\d/;
    regexp.exec("r"); // executing the regexp forces YARR JIT compilation
}
main();


Below is a WinDbg screenshot showing the crash on Firefox 25 / Windows 8.1 (64-bit):


At this stage, we can infer that an overflow occurred and that, as a protective measure, FF decided to crash rather than handle the issue gracefully. Note the integer 2147483647 used as a quantifier bound in the PoC's regular expression.

In the call stack, there are functions dealing with the RegExp just before mozjs!WTF::CrashOnOverflow::overflowed. Let's put a breakpoint on the preceding function, mozjs!JSC::Yarr::YarrGenerator<1>::generatePatternCharacterFixed+0x87, and see what happens just before the overflow is detected.

This is the function we are setting the breakpoint (bp) on:

void generatePatternCharacterFixed(size_t opIndex)
    {
        YarrOp& op = m_ops[opIndex];
        PatternTerm* term = op.m_term;
        UChar ch = term->patternCharacter;

        const RegisterID character = regT0;
        const RegisterID countRegister = regT1;

        move(index, countRegister);
        sub32(Imm32(term->quantityCount.unsafeGet()), countRegister);

        Label loop(this);
        BaseIndex address(input, countRegister, m_charScale, (Checked<int>(term->inputPosition - m_checked + Checked<int64_t>(term->quantityCount)) * static_cast<int>(m_charSize == Char8 ? sizeof(char) : sizeof(UChar))).unsafeGet());
        // ...
    }

The bp is set on the BaseIndex address() part. This is where some checks are performed on our integer.

After stepping through different checks, our integer (2147483647) is stored in both lhs and rhs, which are then summed together. The sum is stored in the "result" variable, as shown below:

The sum of lhs and rhs is 4294967294 (0xFFFFFFFE), which is stored in an int64. Following that, a further check is performed, as shown below:

 template <typename U> Checked(const Checked<U, OverflowHandler>& rhs)
        : OverflowHandler(rhs)
    {
        if (!isInBounds<T>(rhs.m_value))
            this->overflowed();
        m_value = static_cast<T>(rhs.m_value);
    }
Within the isInBounds check (in the screenshot below), the minimum value is 0x80000000 and the maximum value is 0x7FFFFFFF, i.e. between -2147483648 and 2147483647, the range of a 32-bit signed integer.

The rhs.m_value is now 4294967294 (0xFFFFFFFE) as a result of the previous arithmetic operation between lhs and rhs.

This triggers the check, since 0xFFFFFFFE is greater than 0x7FFFFFFF (the maximum value in the isInBounds check). overflowed() is then called, which simply crashes FF.

Monday, 9 September 2013

BurpCSJ extension release

As part of my research and the talk titled "Augmented Reality in your web proxy", presented during the HackPra AllStars track at the OWASP AppSec EU 2013 security conference in Hamburg, I decided to release a new Burp Pro extension which integrates Crawljax, Selenium and JUnit.

I decided to take this approach to increase application spidering coverage (especially for Ajax web apps), speed up complex test-cases and take advantage of the Burp Extender API.

  • BurpCSJ extension JAR - download (all dependencies included)
  • BurpCSJ source code - github
  • "Augmented Reality in your web proxy" - presentation (slideshare)
Getting started
  1. Download BurpCSJ;
  2. Load the BurpCSJ extension JAR via the Extender tab;
  3. Choose a URL item from any Burp tab (e.g. Target, Proxy history, Repeater);
  4. Right-click on the URL item;
  5. Choose the menu item "Send URL to Crawljax";
  6. Crawljax will automatically start crawling the URL that you chose.



BurpCSJ extension in action:

BurpCSJ Tutorial - Using Crawljax

This is a simple tutorial to get you started with BurpCSJ and Crawljax.

Installation is easy - just download BurpCSJ and import it into Burp via the Extender tab, as shown below:

Extender -> Add -> Choose File

Once the extension is loaded, two new tabs will appear on the right side:

Start crawling

To start crawling, grab a URL item from any Burp tab (e.g. proxy history), right-click on the item and choose "Send URL to Crawljax", as shown below:

After this, a Crawljax session will start based on the settings configured via the Crawljax tab.
It is always recommended to choose a web-root URL item for Crawljax, rather than a specific page or folder. This is typically the URL that you have configured under Target/Scope in Burp.

Crawling with a different browser

Under the Crawljax tab, it is possible to configure the path to the browser drivers, proxy settings and other options for Crawljax.

If you need to use a different browser with Crawljax, you will need to add the relevant driver or executable.
In this example, let's use the Chrome driver:

Once Chrome is selected, you can start Crawljax with Chrome as described in the previous step.

Crawling an application with login/authentication

If you are testing a web application with login/authentication, it is recommended to use Burp's cookie jar. This option allows BurpCSJ to pass cookies to Crawljax when crawling a site. If you already have session tokens in the cookie jar, BurpCSJ will use those.

Exclusion list

The exclusion list allows you to filter out unwanted pages, such as logout or signoff. More entries would be needed for complex applications, such as administrative interfaces where crawling might actually change or modify the application state.

Setting crawling for HTML elements

The last part allows more granular control over the HTML elements considered by Crawljax. By enabling more HTML elements, it is possible to apply Crawljax's logic against more of the page. As a consequence, the Crawljax session will probably take longer to complete.

Generating a report of crawling session

The CrawlOverview plugin can be invoked once an output folder is set. At the end of the Crawljax session, the report will be generated under that folder.

An example of CrawlOverview output can be seen here: