Monday, 9 April 2012

Secure coding (and Application Security) must be invisible to developers

At OWASP a while back we came up with the idea that '...Our [OWASP] mission is to make application security visible...', and for a while I believed that if only everybody had full visibility into 'Application Security', then we would solve the problem.

But after a while I started to realize that what we need to create for developers is for 'Application Security' / 'Secure Coding' to be INVISIBLE 99% of the time. It is only the decision makers (namely the buyers) who need visibility into an application's security state.

We will never get secure applications at a large scale if we require ALL developers (or even most) to be experts in security domains like Crypto, Authentication, Authorization, Input Validation/Sanitization, etc.


Note that I didn't say that NOBODY should be responsible for an Application's security. Of course there needs to be a small subset of the players involved who really care about, and understand, the security implications of what is being created (we can call these the security champions).

The core idea is that developers should be using Frameworks, APIs and Languages that allow them to create secure applications by design (where security is there but is invisible to developers).

And when they (the developers or architects) create a security vulnerability, at that moment (and only then), they should have visibility into what they created (i.e. the side effects) and be shown alternative ways to do the same thing in a secure way.
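
SQL injection is a good concrete example of this. Below is a minimal C# sketch (the table, column and method names are invented for illustration) of the vulnerable pattern next to the secure alternative that the developer should be shown at that exact moment:

    using System.Data.SqlClient;

    class OrderQueries
    {
        // What a developer writes when security is left entirely up to them:
        // string concatenation makes SQL injection possible, and nothing in
        // the API pushes back.
        static SqlCommand Vulnerable(SqlConnection conn, string customerName)
        {
            return new SqlCommand(
                "SELECT * FROM Orders WHERE Customer = '" + customerName + "'",
                conn);
        }

        // The 'secure way to do the same thing': the value travels as a
        // parameter, is never treated as SQL, and the injection simply
        // cannot happen.
        static SqlCommand Secure(SqlConnection conn, string customerName)
        {
            var cmd = new SqlCommand(
                "SELECT * FROM Orders WHERE Customer = @customer", conn);
            cmd.Parameters.AddWithValue("@customer", customerName);
            return cmd;
        }
    }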

This is how we can scale, which is why it is critical that OWASP (and anybody who cares about solving the application security problem) focuses on improving our Frameworks' ability to create secure apps.

One key problem that we still have today (April 2012), and which is preventing this mass 'invisibilization' of security at the Framework level, is that we are still missing Security-focused SAST/Static-Analysis rules.

How we fixed Buffer Overflows

A very good and successful example of making security 'invisible' to developers was the removal of 'buffer overflows' in the move from C/C++ to .NET/Java (i.e. from unmanaged to managed code).

Do .NET/Java developers care about overflowing their buffers when handling strings? No, since that is handled by the Framework :)
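
As a rough illustration (the buffer size and input string are made up), this is what that protection looks like: the same 'copy a too-long string into a small buffer' operation that silently corrupts memory in C is bounds-checked by the runtime in C#:

    using System;

    class SafeStrings
    {
        static void Main()
        {
            // In C, strcpy()'ing this input into a char[10] would silently
            // write past the end of the buffer and corrupt memory. In managed
            // code the same mistake is caught by the runtime: the worst case
            // is an exception, not memory corruption.
            var buffer = new char[10];
            var input  = "this input is longer than ten characters";

            try
            {
                // CopyTo checks the destination length before touching memory.
                input.CopyTo(0, buffer, 0, input.Length);
            }
            catch (ArgumentOutOfRangeException)
            {
                Console.WriteLine("The runtime refused to overflow the buffer.");
            }
        }
    }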

THAT is how we make security (in this case, Buffer Overflow protection) invisible to developers.

The Cooking Analogy

If you are looking for an analogy, "a chef cooking food" is probably the best one.

Think of software developers as chefs cooking with a number of ingredients (i.e. APIs).

Do you really expect that chef to be an expert on how ALL those ingredients (and the tools he is using) were created and how they behave?

It is impossible: the chef is focused on creating a meal!

Fortunately, the chef can be confident that his ingredients and tools will behave in a consistent and well-documented way (which is something we don't have in the software world).

I like the food analogy because, as with software, one bad ingredient is all it takes to ruin it.



5 comments:

Darren said...

While I agree that trying to make developers into security experts is a fool's errand, making AppSec "invisible" in this way isn't the right answer either.

We don't expect developers to be experts on any other aspects of quality either; yet we don't tolerate the sort of ignorance that "invisible quality" would require. Developers are expected, for example, to have some understanding of performance engineering, even though we don't expect them to be performance engineers.

To extend your chef analogy, chefs aren't expected to be experts with every ingredient, but they are expected to have a working knowledge of them and, more importantly, a continuous awareness of food safety.

Joshbw said...

Darren is somewhat right in that there is a baseline amount of information a dev should know as a general part of engineering, but where I disagree is when he says that invisible security isn't the right answer either.

The truth is that there isn't one right answer, but rather a necessary mix of answers. Some problems are not realistically going to be fixed in the frameworks, either because they are too context-dependent (should this resource be protected by an auth check?) or because the "fix" breaks too much stuff.

On top of that, the frameworks are being developed with the same mindset as every other product out there: "whatever makes the customer happy comes first, and then maybe security, if it doesn't interfere with that". A great example is that MVCs really should employ declarative binding rather than auto binding; it is only a marginal hit to development and it ensures that the only fields that can be set are those explicitly exposed by the dev. Despite this problem being known for years, even MS has taken the stance that devs should opt in to declarative binding, even though MVCs are default-allow. And MVC problems pale in comparison to the W3C standards woes, where with each iteration we perpetuate the same broken issues while adding new ones.
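
To make the binding difference concrete, here is a rough ASP.NET MVC sketch (the UserProfile model and its fields are invented):

    using System.Web.Mvc;

    // With default ("auto") binding, ANY property with a public setter can be
    // set by a crafted POST, including IsAdmin, which was never meant to come
    // from the form.
    public class UserProfile
    {
        public string Name    { get; set; }
        public string Email   { get; set; }
        public bool   IsAdmin { get; set; }
    }

    public class ProfileController : Controller
    {
        // Insecure by default: posting "IsAdmin=true" silently binds it.
        [HttpPost]
        public ActionResult EditAuto(UserProfile profile)
        {
            // ... save profile ...
            return View(profile);
        }

        // Declarative binding: only the listed fields can ever be bound,
        // no matter what the request contains.
        [HttpPost]
        public ActionResult EditDeclared(
            [Bind(Include = "Name,Email")] UserProfile profile)
        {
            // ... save profile ...
            return View(profile);
        }
    }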

I agree that the underlying platforms that devs use should be secure, but we have a lot of work to do to get there. And even if we get there, that doesn't mean they can't be used insecurely even when they are secure by default (C# does have UNSAFE, after all), and that's where we will still need education and awareness. Modern cars are far safer than cars from even 10 years ago, but drivers still need to be aware of when they are using them recklessly.

Dinis Cruz said...

@Darren, I think you are underestimating how much we currently require developers to know (and understand) in order to write 'secure code'. There are just too many moving parts today, and in most cases we need to 'hide' them inside the Frameworks.

Note that by 'invisible' I mean that, under normal coding practices, these security details should not be visible.

@Joshbw, the only way I see us being able to deal with those 'real world' details is to have SAST rules for those frameworks (see We need Security-focused SAST/Static-Analysis rules). That is how we can deal with the Auth checks you talk about.

Also, I really like your comment on what happened with the .NET MVC stuff; that is a great case study of how 'vulnerable by design' Frameworks are created.

On the C# UNSAFE keyword, did you know that it is a compiler trick? Basically, all compiled IL runs in an 'unsafe' environment (it is up to the compiler to make sure it emits valid IL when the UNSAFE keyword is not used).

And UNSAFE is another good example of why we need SAST rules for Frameworks. We can't just say to a developer "...Do NOT use UNSAFE..."

what we have to say is: "...When you use UNSAFE, here are the rules of the game..."

or even better: "...Use UNSAFE, and we'll let you know if you create a vulnerability..."
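
To make that concrete, here is a minimal sketch (the array and values are made up) of the kind of code the UNSAFE keyword lets back in, and that such a rule would have to find and reason about:

    using System;

    class UnsafeExample
    {
        // Compile with: csc /unsafe UnsafeExample.cs
        // Inside an 'unsafe' block the compiler stops enforcing memory
        // safety, so the invisible buffer-overflow protection of managed
        // code is gone again.
        static unsafe void Main()
        {
            var buffer = new int[4];

            fixed (int* p = buffer)      // pin the array, get a raw pointer
            {
                for (int i = 0; i < buffer.Length; i++)
                {
                    p[i] = i * i;        // raw memory writes: no bounds checks
                }

                // Nothing here would stop "p[4] = 42;" from silently
                // corrupting whatever lives next to 'buffer' on the heap.
            }

            Console.WriteLine(string.Join(",", buffer));   // prints 0,1,4,9
        }
    }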

Dinis Cruz said...

Just posted this follow-up entry: Why ASP.NET MVC is 'insecure by design', just like Spring MVC (and why SAST can help).

Darren said...

@Dinis I agree that where we're at isn't right; all I'm saying is that completely hiding security from devs, on the premise that they shouldn't need to know anything about it, is swinging the pendulum too far in the other direction.

@Joshbw you're right on, IMO. Frameworks do need to be secure, because devs should be able to focus on creating software rather than securing frameworks. But frameworks are created by developers too. If all developers have enough awareness and knowledge to make their components secure, and to ask for help from security pros when they need it, I think we're hitting the right balance.

Unfortunately, there's a long way to go on both counts.