Jedi mind tricks for building application security programs
David Rook and Chris Wysopal
Most developers actually want to write secure code
You need to take ownership of the app sec problem, not just hand developers a list of bugs. Telling them where things are bad doesn’t help them understand the code they need to write to make things better. If security requirements aren’t in the spec, how can you expect them to design and write secure code? The right support needs to be in place before they can write secure code.
Developers generally like producing quality code and want security knowledge. Use this! If you define quality as secure, then it’s a lot easier to get the code you want and need.
Following something like the Microsoft SDL is great, but only if it meets your company’s needs and requirements!
So what do we do to help developers?
They are the ones who will make the security program a success, after all!
Help them understand how to write secure code. Obvious really, but often overlooked.
Own the security problem with them. Don’t be the auditor that just says when things are wrong. Don’t dictate! Speak, listen, learn and improve things. The “I’m the security person, I’m right” attitude isn’t helpful here!
Help developers catch faults before the code reaches the security testing phase. Clear security test cases empower developers to make things better themselves. Buying Burp Suite licenses for your development team isn’t a waste!
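A security test case can be as small as a unit test a developer runs before code ever reaches the security team. This is a hypothetical sketch: the `sanitize_username` helper and its rules are assumptions, but the idea is encoding a security requirement as a test developers own.

```python
# Hypothetical security test case. sanitize_username is an assumed helper;
# the point is that the security requirement lives in a test developers run.
import re

def sanitize_username(value):
    """Accept only alphanumerics, dots and underscores, 1-32 chars."""
    if not re.fullmatch(r"[A-Za-z0-9._]{1,32}", value):
        raise ValueError("invalid username")
    return value

def test_rejects_injection_payloads():
    # Classic injection-style payloads that must be rejected, not stored.
    for payload in ["' OR 1=1 --", "<script>alert(1)</script>", "../../etc/passwd"]:
        try:
            sanitize_username(payload)
            assert False, f"accepted malicious input: {payload}"
        except ValueError:
            pass  # rejection is the expected behaviour

test_rejects_injection_payloads()
```

Tests like this catch regressions on the developer’s machine, long before a penetration test would.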
We speak an Alien Language
We talk about injections, jacking, pwning… if they don’t understand the issues, and we don’t speak the right language, how can we get the support and budget we need? If your findings aren’t readable, they don’t help anybody.
How do you get management and the C-level to sign off on secure code? How do you get a project manager to understand they need to hold a release back a week to make it secure?
We present findings in weird formats using weird measurements.
Using something like CVSS is all well and good, but it doesn’t translate well to business risk.
CVSS scores can make things look like much less of an issue than they really are, depending on the temporal and environmental scores. Does the score make sense to a tester? A manager? A developer?
We feel security should just happen without having to justify it.
We need to speak the business language
Why should they care about injection X? Talk the business language and convey issues in ways they can equate to real business impact.
We need to present findings in a format that makes sense to the business.
Use examples that show brand issues after hacks… RSA springs to mind. If you Google RSA security, the first page is nothing but news reports on the hack. THIS is business impact.
How does your business score risk?
Scoring security findings in a way the business doesn’t use means they lose business impact and can be ignored more easily.
We’re not saying dump CVSS, but also convey the risk in a way the business can translate into business risk easily. Use whatever scale they use, no matter how it looks.
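One lightweight way to do that translation is to map CVSS scores onto the risk labels the business already uses. This is a hypothetical sketch only; the thresholds and labels here are illustrative and should come from your own risk-management framework.

```python
# Hypothetical mapping from a CVSS score to business risk labels.
# Thresholds and labels are assumptions; use your business's own scale.

def business_risk_label(cvss_score):
    """Translate a 0.0-10.0 CVSS score into a business risk bucket."""
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if cvss_score >= 9.0:
        return "Critical - potential material business impact"
    if cvss_score >= 7.0:
        return "High - remediate before release"
    if cvss_score >= 4.0:
        return "Medium - schedule remediation"
    return "Low - track and accept"

print(business_risk_label(9.3))  # Critical - potential material business impact
```

The CVSS number is still there for the technical audience; the label is what goes in the report the business reads.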
Application security is NO exception when it comes to resourcing.
Apply the KISS principle
Keep everything as simple as possible
Understand what developers want and need to write secure code. It’s not just a yearly refresher on the new OWASP Top Ten!
Work with the business to understand what they care about.
Why do we need executive buy-in?
We need tools, training, services… and it’s going to affect the business!
If a secure development program needs to be implemented, it can’t be opt-in. Without business buy-in, it is hard to enforce, and some departments won’t be interested.
Business buy-in is required.
C-level thinks in $$$
How can security reduce risk, lower costs, and help grow the top line?
How can you translate security issues into numbers that make sense to the C-level?
- Legal Risk –> Legal costs, settlements, fines
- Compliance Risk –> fines, lost business
- Brand Risk –> Lost business
- Security Risk –> ???
If security risk is the only thing not quantified, how can you expect buy-in? How can we do what these other disciplines already do?
Translate technical risk into monetary risk
What is the monetary risk from vulnerabilities in your application portfolio?
Use breach costs to estimate your costs, although there isn’t much data out there yet about what breaches cost. Unless your company has experienced a breach, you have to use external sources.
Ponemon Institute created a per-record view of breach costs (April 2010) that shows some information… useful if your company is worried about exposure of PII.
A financial-sector breach costs $248 per record… how does this relate to your company?
Using threat space data is also useful for showing where the threats come from (at least for those reported).
40% of breaches are due to hacking (from the 2009 Verizon DBIR)… breakdowns of root cause let us measure real-world use of vulnerabilities. However, the data is mostly PCI-based and might not be useful for your industry vertical.
There’s not really enough data out there if you don’t fit the model (mostly financial PII exposure).
Using all this data you can come up with the likelihood of a specific vulnerability being used to exploit your company. It’s not the final data you need, but it gives a starting point for discussion and lets you gauge whether your company is better or worse than the norm within your sector.
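Putting those external data points together gives a rough, back-of-the-envelope estimate of monetary exposure. The sketch below is illustrative only: the per-record figure is the Ponemon financial-sector number from the talk, and the record count and likelihood are assumptions you would replace with your own data.

```python
# Rough, illustrative estimate of monetary exposure from a breach.
# Inputs are assumptions to replace with your own data:
#   records_at_risk    - records the vulnerable application can expose
#   cost_per_record    - e.g. Ponemon's 2010 financial-sector figure ($248)
#   breach_likelihood  - annual probability of exploitation, informed by
#                        sources like the Verizon DBIR root-cause breakdowns

def estimated_exposure(records_at_risk, cost_per_record, breach_likelihood):
    """Expected annual loss in dollars: records * cost/record * likelihood."""
    return records_at_risk * cost_per_record * breach_likelihood

# Example: 100,000 customer records, $248/record, assumed 5% annual likelihood
exposure = estimated_exposure(100_000, 248, 0.05)
print(f"${exposure:,.0f}")  # $1,240,000
```

A single dollar figure like this is not precise, but it is a number the C-level can put next to legal, compliance, and brand risk.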
Understand the real-world dollar cost of these vulnerabilities at a company-wide level. Developers care about a single app, but the C-level cares about everything in the organization, including code not from in-house sources. Your application security program needs to incorporate this.
Your program needs to actually be achievable.
Tips to make the program successful
- The right people have to understand what is going to happen before you start
- Do a real world penetration test or assessment of a project. Demonstrate risk!
- Integrate into processes (SDLC, procurement/legal, M&A)
- Make things relevant to the developers. Don’t show them example code and fixes in Java if they code in PHP