Catch22 (in)security / ChrisJohnRiley

Because we're damned if we do, and we're damned if we don't!


Defense by Numbers: Making problems for script kiddies and scanner monkeys

Since early 2012 I’ve been working on a simple theory…

The Theory:

By varying [response|status] codes, it should be possible to slow down attackers and automated scanners.

If you’ve met me at a conference any time in the last year I’ve probably talked about it at length and bored the hell out of you (sorry about that BTW).

After researching a number of aspects of this theory I put forward a presentation for BSidesLondon to talk about my findings and how it might be applied to application defense.

The topic can be a little complex due to the various ways browsers handle [response|status] codes. Even within a specific browser, the handling of different content types varies. JavaScript is a prime example of that. Whereas a browser will happily show you a webpage received with a 404 “Not Found” code, the same browser may not accept active script content delivered with the same code.
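The idea can be sketched in a few lines. The following is a minimal illustration (not the author's actual implementation; the decision rule and handler are hypothetical) of serving ordinary HTML with a decoy status code while keeping the standard 200 for active script content, which browsers are stricter about:

```python
# Sketch of the "vary the status code" theory. Assumption: browsers render
# HTML regardless of a 404 status, but may refuse to execute JavaScript
# delivered with a non-200 code, so scripts keep 200 while pages get a decoy.
from http.server import BaseHTTPRequestHandler, HTTPServer


def choose_status(path: str) -> int:
    """Pick a response status based on the requested content type."""
    if path.endswith(".js"):
        return 200  # active content: stay standards-compliant
    return 404      # HTML: decoy status to confuse naive scanners


class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Looks fine in a browser.</body></html>"
        self.send_response(choose_status(self.path))
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), DecoyHandler).serve_forever()
```

A scanner that trusts status codes would log every page on such a server as "Not Found", while a real user sees the site render normally.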

During testing I also discovered a couple of interesting issues with Proxy servers that could be used by attackers to expose credentials… as well as some very interesting browser quirks that are probably only interesting to a handful of people. Still, I like edge-case stuff, it’s weird and that suits me just right 😉

BSidesLondon Abstract

On the surface most common browsers (user agents) all look the same, function the same, and deliver web content to the user in a relatively uniform fashion. Under the surface, however, the way specific user agents handle traffic varies in a number of interesting ways. This variation allows intelligent and skilled defenders to play with attackers and scripted attacks in a way that most normal users will never even see.
This talk will attempt to show that differences in how user agents handle web server responses can be used to improve the defensive posture of a website. Further examples will be given that show specially crafted responses can disrupt common automated attack methods and cause issues for casual attackers and wide-scale scanning of websites.

If the topic is something that interests you (and I’m sure there’s a lot more research to be done here) feel free to take a snoop at the slides… The talk was also recorded, so keep an eye on the BSidesLondon website and twitter feed for information on the video/audio release.

Links:

  • Some thoughts on HTTP response codes –> HERE
  • Privoxy Proxy Authentication Credential Exposure [CVE-2013-2503] –> HERE
  • mitm-proxy scripts used in testing –> HERE

Cookies… the next generation

I like cookies…. which probably explains why none of my T-Shirts fit properly anymore. Still, HTTP cookies are good too, even if they don’t taste as good.

Michal Zalewski wrote a great piece discussing the ins/outs and general issues of cookies over on his blog (HTTP cookies, or how not to design protocols). In his blog he raised a number of issues and questions, and rightfully so. The cookie standard (depending on which RFC you go by) is seriously outdated and has been left behind by the constant march of progress.

Some of the issues he raises are already being examined and proposed solutions exist (at least in part) in the latest IETF httpstate cookie draft (at version 17 at time of writing).  I wrote a little about the IETF HTTP State Management Mechanism working group back in June. Trying to make sense of the mess that’s already been made is a difficult enough task without taking into consideration future uses (HTML 5 anybody!). Still they seem to be making headway in some areas.

In particular, section 5.3 Storage Model includes (5) a section on rejecting public suffixes to, for example, prevent example.com.pl from setting a cookie for *.com.pl (an issue Michal talks about in his blog post). Having a blacklist might not be the most elegant solution here (after all, the enforcement of this lies with the end user’s browser and how up-to-date their suffix list is), however it’s a step in the right direction.
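The public-suffix check amounts to a simple domain-matching rule. A minimal sketch, assuming a tiny hard-coded suffix list (real browsers consult the full, regularly updated Mozilla Public Suffix List, so the names and list here are purely illustrative):

```python
# Illustrative public-suffix check, per the httpstate draft's storage model.
# PUBLIC_SUFFIXES is a toy stand-in for the real Public Suffix List.
PUBLIC_SUFFIXES = {"com", "com.pl", "co.uk"}


def may_set_cookie(host: str, cookie_domain: str) -> bool:
    """Return True if `host` may set a cookie scoped to `cookie_domain`."""
    cookie_domain = cookie_domain.lstrip(".")
    if cookie_domain in PUBLIC_SUFFIXES:
        return False  # reject cookies scoped to an entire public suffix
    # Otherwise require a normal domain match: exact host or a parent domain.
    return host == cookie_domain or host.endswith("." + cookie_domain)


# example.com.pl must not be able to set a cookie for all of *.com.pl,
# but shop.example.com may still set one for example.com.
```

The weakness the post notes is visible in the first line: the whole defense is only as good as the suffix set shipped with the browser.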

5.3 (10) also starts to make some inroads into the problem of non-HTTP sources (APIs) setting cookies with the HttpOnly flag set. It seems a little silly to set a cookie when you can’t read it… but nobody ever said the old RFCs made sense now did they! I can’t find the same treatment for the Secure flag, but I can also see possible scenarios where setting it from a non-HTTP source could be acceptable.
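For reference, this is how the two flags look when set the normal way, from an HTTP response. A short sketch using Python's standard `http.cookies` module (the cookie name and value are made up):

```python
# Setting HttpOnly and Secure on a cookie server-side. An HttpOnly cookie
# is invisible to script (document.cookie) — which is exactly why a script
# or non-HTTP API setting one for itself is the odd case the draft tackles.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["httponly"] = True  # not readable by page script
cookie["session"]["secure"] = True    # only sent over HTTPS

header = cookie.output(header="Set-Cookie:")
# Produces a header along the lines of:
#   Set-Cookie: session=abc123; HttpOnly; Secure
```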

I won’t bore you all by going on further, especially as I’m no cookie expert. If you’re interested in cookies (either the edible or the HTTP kind) then check out Michal’s blogpost on the subject and take a look at the IETF draft (it’s not as bad as you’d think for an RFC draft).

I would also encourage interested parties to comment on the current draft if you see anything missing, or something that doesn’t solve an existing issue. After all, it’s only now that we can make our voices heard!

Links