Friday, 28 August 2009

Why do I want to live in an insecure world?

I want to live in an insecure world because an insecure world is a safer world

I.e. the more insecure we are, the safer we are!

The fact that something bad/unexpected CAN happen (what we usually call a vulnerability) doesn't automatically mean that it WILL happen or be exploited (something the business guys understand very well, but the security guys usually don't).

In fact, usually, the more insecure a system is, the easier it is to do business with it, and the easier it is for its users to interact with it.

This is something the credit card industry understands really well. In fact, one criticism that the great Geekonomics book throws at it (see, for example, the chapter "We'll Be Compliant, Later") is that the credit card industry builds insecure systems by design, mainly because it wants them to be easy to use. It accepts a certain percentage of fraud, which basically means it is trading security for functionality and ease of use (i.e. it is making a risk decision to build insecure systems (or systems not as secure as they could be)).

PCI, from a risk-reduction point of view, is a MASSIVE success. From a making-the-websites-more-secure-and-not-exploitable point of view, PCI is probably not as successful/effective as the PCI-bashing crowd would like it to be.

Coming back to the main idea of this post, the reason I want to live in an insecure world is that I want to live in a world where I can leave my doors unlocked!

I (and my family) am much more secure in a neighborhood where I don't have to lock my door (and don't have to physically protect my assets) than in a neighborhood where I need private security guards with machine guns outside my door to protect myself.

I.e. I want to live in a world where I don't have to care about security, because that is actually a safer world!

Of course, in the real world and in normal neighborhoods, one can't leave the doors unlocked all the time. What we do is adjust our security measures to the current perceived threats (and to the probability that the vulnerability (the unlocked door) will be exploited).

It is ultimately a risk decision (sometimes our 'perceived' threats are grossly over- or under-estimated (see Bruce Schneier for tons of examples of this), but that tends to adjust itself with time).

I don't have a problem with clients making risk decisions (for example, choosing NOT to fix a security vulnerability, or applying a short-term remediation using a WAF). As long as it doesn't affect me personally, it is their decision to make. After all, it is their business.

I DO HAVE a problem when clients DON'T know how insecure they might be and how many vulnerabilities exist in their applications/systems. My view of my job (and of OWASP's) is to give clients and users VISIBILITY into the security implications of what they are building, buying or using. What to do with that visibility and knowledge is NOT my job :)

Ultimately, when the market has a good understanding of what is going on, it tends to make good decisions. The problems tend to occur when the market DOESN'T understand what is going on (as the recent financial crash has shown us).

Why We Need Breakers ( ... and virus writers ... )

One of the things that OWASP's Jeff Williams usually likes to say is that we need builders more than breakers.

He (and others) basically argue that we need more guys focusing on how to protect systems rather than on how to break/exploit them. As an example, he asks: 'why do we need another example of how to exploit a specific variation of SQL injection, instead of spending those resources figuring out how to fix it?'

The reality is that we need both. We actually need both research activities/cultures, because showing us how to break something is a great measurement of the current state of a particular problem (see also Jeremiah's Builders, Breakers, and Malicious Hackers).

The irony of all this is that Jeff, in his Black Hat Enterprise Java Rootkits presentation, became a BREAKER himself, and spent considerable resources and time showing, in several powerful demos, how dangerous Java code executed outside the Java Security Manager (i.e. outside a sandbox) can be. He is trying to do for Java what I tried (and gave up) to do for .NET (see Past research on Sandboxing and Code Access Security (CAS)).

Today, just about everybody runs Java code outside a sandbox, and awareness that this could be very dangerous has still not reached critical mass. So for people who are aware of this problem (like Jeff), unless he starts hacking away at real-world targets to prove his point (not a very good idea in 2009), the only alternative is to show what can be done and how bad the current status quo is. And by doing so, he is being a breaker.
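Jeff's demos hinge on a simple fact: unless a SecurityManager is explicitly installed, the JVM runs all code it loads fully trusted. Here is a minimal sketch of that check (the class name `TrustCheck` and the printed messages are mine, not from Jeff's presentation):

```java
import java.io.File;

// Minimal sketch: by default the JVM installs no SecurityManager,
// so any code it loads (plugins, libraries, deserialized objects)
// runs with the same privileges as the application itself.
public class TrustCheck {
    public static void main(String[] args) {
        SecurityManager sm = System.getSecurityManager();
        if (sm == null) {
            // No sandbox: this code could read/write files, open sockets,
            // load native libraries or call System.exit() unchecked.
            System.out.println("No SecurityManager: code runs fully trusted");
            System.out.println("user.home is readable: "
                    + new File(System.getProperty("user.home")).exists());
        } else {
            // A sandbox is active: sensitive operations are checked
            // against the installed security policy.
            System.out.println("SecurityManager active: " + sm.getClass().getName());
        }
    }
}
```

Run with a plain `java TrustCheck` and you get the 'fully trusted' branch; the sandboxed branch is typically only reached inside applet containers or when the JVM is started with `-Djava.security.manager` and a policy file.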

Due to his past 'builders vs breakers' position, Jeff took a bit of flak for it, but basically he is showing/highlighting the problem and helping to raise awareness of its implications.

The main reason we need breakers is that they actually show us the current state of the problem, since the easier the breaker's job is, the more easily the target can be exploited.

We also need 'non malicious' breakers because they actually show us what’s going on.

For example 'non malicious' (& with no criminal intent) virus writers!

I'd like to go on record saying that 'non malicious' virus writers are actually our best friends, because they raise our overall level of security.

Although they may create short-term havoc (and pain for the guys directly affected), they literally raise the level of security (for example, by 'forcing' people to patch existing vulns and by making the eco-system more resilient).

This is why we need 'non malicious' viruses in the real world as well as the web world; they are actually benign and helpful (in the big scheme of things :) ).

In fact we (the good guys in the web application security world) should be paid to write viruses in order to push security up, but that is very unlikely to happen .... in the short term ... :)

Thursday, 20 August 2009

O2 Module presentation - Aug 09

Just uploaded a presentation to O2 Modules Presentation V1.0, which is the first time ALL major O2 modules have been mapped and documented :)

I will be adding this data to the multiple sections of the current O2 website.

Any comments?

Sunday, 16 August 2009

Past research on Sandboxing and Code Access Security (CAS)

Following a recent Twitter (@DinisCruz) thread, I realized that I had no post with links to my (failed) attempts to get Microsoft (and Sun) to allocate serious resources to the sandboxing of managed applications and, specifically for .NET, to making Code Access Security work in the real world (i.e. figuring out the people, processes and technologies required so that it is possible to develop & deploy commercial applications & websites that run under a meaningful + effective Partial Trust environment).

If you are interested in this topic, the best place to start is this PPT presentation (which was the last one I created): Making the case for Sandbox v1.1 - Dinis Cruz - SD Conference.ppt

Thursday, 13 August 2009

Links dump - 13 Aug

Here is a list of windows I had open in Safari & Firefox that I want to keep for later reference:

Wednesday, 12 August 2009

Update on O2 & Ounce & IBM

As you probably know by now IBM bought Ounce Labs on July 28th.

What does this mean for O2 and all the Open Source research that I have been doing (and publishing) for the last 12 months? The honest answer is that I still don't know (i.e. I'm still not an IBM employee; I've basically just had my Ounce Labs 'independent contractor' agreement extended to IBM).

I had a great meeting with a number of IBM/AppScan guys last week in Ottawa (Canada) and will be back there next week to do an O2 presentation and talk about the next steps.

There has also been talk (via Jim Manico and Tom B (here and here)) of moving O2 to OWASP, maybe even calling it O3 (as in "OWASP O2", where O2 could also be a reference to Oxygen). As Jim correctly mentions, O2 is released under an Open Source license (Apache 2.0) and there is nothing preventing this move (to an OWASP project). I love the idea of moving it to OWASP, but let's see what IBM's final position on this is (since ideally they should officially support this new OWASP project).

In practice, these IBM presentations mean that you will see (compared with what exists today) a HUGE amount of O2 documentation being published to the O2 website.

As with most of you, the first question that the IBMers are asking is "I've heard about O2, but WHAT is it, and what does it do?" :)

I'm also going to extend the O2 module that shows how to map "Ounce's scan results with HP WebInspect scan results" to IBM's AppScan and Rational. This should be an interesting (and powerful) demonstration of O2's capabilities (and btw, AppScan Standard Edition is a .NET application :) :) :) ... 'nuff said :) )

The good news is there is lots of excitement and energy to make the O2 + Ounce + AppScan + IBM integration work (in a way that keeps O2 Open Source and makes business sense for IBM). Several Ounce guys are being really supportive and helping to spread the word. See, for example, this post from Ian Spiro (Ounce / IBM and Ounce Open - O2 User Inside Track), who had a 'major O2 epiphany' last week (I really like his "Open source + Knowledge = Control" idea).

So expect to see a lot of activity over the next week, and if you have specific questions about O2, now is probably the best time to ask :)