It’s been a while since I’ve had the chance to sit down and knock out a post for the blog. I’d love to say that I’ve been working on a project that you’ll soon all get to see, but that would be a lie. Things have just been crazy at work: 12-hour days working out of a hotel (without Internet access most of the time). Still, that’s almost over and done with. Until I can get the time to knock up a couple of technical items (it’s been far too long), here are some thoughts on my recent conversations about defense in depth.
More than a couple of times in the last month I’ve had to take time to explain the concept of defense in depth. It sounds obvious to those of us in security that one layer of security covering everything is a bad plan. After all, if that one layer has a crack in it, it’s game over. How many websites thought that sending clear-text passwords (or base64-encoded passwords) was secure enough because they used SSL for the connection? How many of them were running Debian servers, and how many customers could have been exposed? It’s not often that security issues the size of the Debian OpenSSL bug or DNS cache poisoning come about. However, they do come about. I’d hope that people would have learnt their lessons by now about things like this. But they haven’t. Companies will continue to ignore the issues until it bites them, and then it’ll be all hands on deck to fix things. Some of the biggest and most technical companies in the world suffer from simple failures like XSS and CSRF, even though their websites have been penetration tested. In the ever-changing world of web applications, you can’t rely on a penetration test to secure you from attackers 100%. A week after your test is completed, you’ve probably added a few extra pages, maybe an additional form or two. Who’s tested these new features? Security is something that only works if you build it up one piece at a time.
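As a quick aside on that base64 point: base64 is an encoding, not encryption, and a couple of lines of Python make that obvious (the credentials here are made up for illustration):

```python
import base64

# Base64 is an encoding, not encryption: anyone who intercepts the
# value can reverse it instantly, no key or password required.
encoded = base64.b64encode(b"alice:hunter2")
print(encoded)                    # looks like gibberish, but isn't secret
print(base64.b64decode(encoded))  # b'alice:hunter2' -- back in the clear
```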
Teaching your programmers that whitelists are better than blacklists may stop the next XSS vulnerability. Why do you need to allow 80 characters into a field designed to hold a date of birth? Come to think of it, why do you even need to allow a user to type into that field? Implement a drop-down (but do it properly, with the same checks on the server). Teach them that client-side JavaScript is great for quick validation, but server-side validation is a must for security. All user input should be classed as hostile.
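To make the whitelist idea concrete, here’s a minimal server-side sketch in Python. The function name and the YYYY-MM-DD format are my own illustration, not anything from a particular framework:

```python
import re
from datetime import datetime

# Whitelist exactly what a date of birth may look like, rather than
# trying to blacklist "dangerous" characters.
DOB_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # e.g. 1980-07-22

def validate_dob(value: str) -> datetime:
    """Reject anything that isn't a plausible YYYY-MM-DD date."""
    if not DOB_PATTERN.match(value):
        raise ValueError("date of birth must be YYYY-MM-DD")
    # strptime also rejects impossible dates such as 1980-02-31
    dob = datetime.strptime(value, "%Y-%m-%d")
    if not 1900 <= dob.year <= datetime.now().year:
        raise ValueError("implausible year of birth")
    return dob
```

A dozen lines like this, run on the server regardless of what the client-side JavaScript already checked, is exactly the kind of extra layer defense in depth is about.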
Teach your support staff about security. Emails can be spoofed, and requests for private information need to be based on policy, not the whim of a tech support monkey. I know how it goes, as I’ve done the technical support thing myself for a long time. If you time your request just right (about two minutes before shift change), you can get almost anything you need, as long as you get off the line quickly.
Teach your network admins that IDS and IPS systems are all well and good. It’s nice to know when someone tries to send an exploit to your box. However, they need to know that an IDS or IPS is very unlikely to stop a CSRF attack. After all, it’s a legitimate user performing a legitimate action (even if they’re unaware of it). It’s also very unlikely to stop an attack that comes from within your network (client-side exploitation, or a user gone bad) if you have the IDS/IPS in the wrong place. Know the limitations of your security, and fix the holes. There is no 100% cover, no magic bullet. You need to test the security as much and as often as you can… and no, sadly, Nessus scans do not mean you’re secure.
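Since the IDS sees nothing anomalous in a CSRF request, the defense has to live in the application itself. Here’s a bare-bones sketch of the usual synchronizer-token approach in Python; the function names and the dict-as-session are illustrative, not from any specific framework:

```python
import hmac
import secrets

def new_csrf_token(session: dict) -> str:
    """Generate a random token, tie it to the session, and embed it
    in the HTML form the server renders."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def check_csrf_token(session: dict, submitted: str) -> bool:
    """Refuse any state-changing request that doesn't echo the token back.
    A forged cross-site request can't read the token, so it fails here."""
    expected = session.get("csrf_token", "")
    # constant-time comparison avoids leaking the token via timing
    return bool(expected) and hmac.compare_digest(expected, submitted)
```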
I wrote a while ago about security through obscurity. The points I raised there still hold true. Take every little benefit you can, and use it. If you use Apache, then take some time to load up mod_rewrite or mod_security. Filter out the known bad user-agent strings, even though they can easily be spoofed. If your five minutes of rule writing saves you from one attack in the next year, it’s worthwhile. Take some time to put IP restrictions on your admin console. If you have to put it under the /admin directory on your web server (and why, oh why do you HAVE to?), then at least stop 99.999% of the web from being able to see it, crawl it, and possibly brute-force it.
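The Apache directives themselves are well documented, so rather than guess at rule syntax here, this is the same pair of cheap wins expressed as a small Python WSGI filter; the agent fragments and the admin subnet are made-up examples:

```python
from ipaddress import ip_address, ip_network

# Illustrative values only -- tune these to your own environment.
BAD_AGENT_FRAGMENTS = ("sqlmap", "nikto", "masscan")
ADMIN_NETWORK = ip_network("10.0.42.0/24")  # hypothetical office subnet

def edge_filter(app):
    """WSGI middleware doing what a mod_rewrite/mod_security rule would:
    drop known-bad agent strings and hide /admin from the wider world."""
    def wrapper(environ, start_response):
        agent = environ.get("HTTP_USER_AGENT", "").lower()
        client = ip_address(environ.get("REMOTE_ADDR", "0.0.0.0"))
        path = environ.get("PATH_INFO", "")

        blocked = any(frag in agent for frag in BAD_AGENT_FRAGMENTS)
        if path.startswith("/admin") and client not in ADMIN_NETWORK:
            blocked = True  # the other 99.999% of the web gets a 403

        if blocked:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return wrapper
```

None of this is hard to bypass on its own, and that’s fine: each layer only has to be worth the five minutes it took to write.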
One last note about penetration testing and defense in depth. I’ve seen a lot of tests where the scope is too limited to mean anything. Testing through the IPS and firewall will make your internal security team feel great when the report comes back with two ports open (80 and 443) and no way to test the infrastructure behind the systems. If you want to know what state the servers are really in, then you need to test from within the network as well. If an attacker manages to get a local LAN port (meeting room, etc.), what can he access? If he exploits a vulnerability in your test environment, what can he access? If he exploits a laptop while your sales manager is surfing at the local Starbucks, what can he access?
There’s no point (well, almost no point) in testing something you know to be secure and ignoring the weak points. That’s just fooling yourself into believing that you’re secure, when it’s only a matter of time before someone finds the gaps you didn’t look at.