Saturday, 21 November 2009

Mr Security Consultant: 'Are You Doing A Good Job' for your clients?

I sometimes feel that our industry misses the point of what we (security professionals) are doing here.
In a nutshell, the current 'Web Application Security assessment' world is far from 'working' (see AppScan 2011 for a fictitious story about what, from a technology point of view, these engagements should deliver)
Security Engagements (namely Web Application ones) should not be seen as a game of 'cat & mouse' where the 'ethical attacker' is trying to break the system!!! (and ultimately prove to the client that they (the security consultants) are any good)
My view is that security engagements are 'knowledge transfer exercises' where people with specific knowledge in one area (Web Application Security) help, as much as they can, the people who don't have it (Managers, Software Architects, Developers, Clients, etc.) during the short period that they are involved with the application (i.e. the 'security engagement period')
The ultimate goal is Risk Reduction with the “Owners, Builders, Buyers & Users” of the target applications being able to make knowledgeable decisions about the security profile of their application (this is what we at OWASP call ‘visibility’).
To play a 'game' where these experts (i.e. the Security Consultants) are NOT provided with AS MUCH INFORMATION AND SUPPORT AS POSSIBLE during their engagements is, frankly, inefficient, unproductive and expensive.
Now talking directly to my peers (the security consultants), regardless of the type of test that you are doing, black-box or white-box (and the time allocated to it), sorry, but you are NOT doing a good job for your clients if:
  1. you don’t have access to the source code
  2. you don’t have access to a live instance of the application
  3. you don’t write unit tests for your results
  4. you don’t understand the client's business model
  5. you are not writing WAF rules or patching the app
  6. you are not giving the developers ‘auto code fixers’
    And here is the bottom line: the measurement of our success should NOT be how many vulnerabilities were DISCOVERED, but how many vulnerabilities were FIXED (or MITIGATED) by the client
    We will be doing our job if we are able to implement workflows that allow developers to easily & quickly fix, deploy and test the reported vulnerabilities.

    The rest of this post will look at each of these 6 requirements individually:

    1) If you don’t have the source code, then you are not doing a good job.  
    Regardless of whether you use tools or do it by hand, when doing a black-box assessment, lack of access to the application's source code will make you very inefficient.  
    Having access to the source code gives you the ability to understand what is going on and to write proofs of concept much more quickly, efficiently and safely (hands up, anyone who has 'bricked' a server or application during a penetration test engagement).  
    It is vital that the client understands the importance of giving you the code. When you are doing a black-box engagement you need to show your client (in the short time allocated to the project) what the problems are, and access to the source code will allow you to use your time more effectively.   
    If the client does not have access to the source code of the applications you are testing, that in itself could be a problem (especially if the client paid for its development)
    Note that when dealing with managed languages like Java or .NET, one can even get away with only being given access to the application's DLLs, WARs and config files (in most cases a zip of the target web folder is all that is needed)
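    Because a WAR (or a zip of compiled .NET assemblies) is just an archive of managed binaries, a surprising amount of application structure can be recovered from it alone. A minimal sketch of the idea (the class name and WAR contents below are made up for illustration):

```python
import io
import zipfile

def list_war_classes(war_bytes):
    """Return the fully-qualified class names packaged inside a WAR file.

    A WAR is just a zip archive, so even without the original source tree
    you can quickly map out the application's classes (and then feed the
    .class files to a decompiler for deeper review).
    """
    with zipfile.ZipFile(io.BytesIO(war_bytes)) as war:
        return [
            name[len("WEB-INF/classes/"):-len(".class")].replace("/", ".")
            for name in war.namelist()
            if name.startswith("WEB-INF/classes/") and name.endswith(".class")
        ]

# Build a tiny in-memory WAR to demonstrate (entries are fake):
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as war:
    war.writestr("WEB-INF/web.xml", "<web-app/>")
    war.writestr("WEB-INF/classes/com/acme/LoginAction.class", b"\xca\xfe\xba\xbe")

print(list_war_classes(buf.getvalue()))  # → ['com.acme.LoginAction']
```

    The same trick works for config files (web.xml, struts-config.xml, etc.), which is why a zip of the deployed web folder is usually all you need to ask for.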

    2) If you don’t have access to the live instance of the application, then you are not doing a good job.  
    Here is the reverse: if you are doing source code analysis and have access to the code, but you don't have access to a live instance of the application, you will also not be able to do as good a job.  
    This is because, even if your focus is on static analysis or source code analysis, you need the black-box approach and access to the application so that you can quickly:
      a) understand how the application works, 
      b) understand if the issues you are finding are actually exploitable, and
      c) pragmatically measure how much coverage & visibility your static-analysis efforts (manual or automated) really have
    Please note that you don't have to find, exploit, write and document a proof of concept for every single problem that you find (just once per vulnerability type or pattern).  
    Since vulnerability exploitation is a good measurement of the exploitability level of a particular vulnerability, I am a great believer that you need to show these exploits in action (one exploit per insecurity pattern) to everyone from business owners to developers.
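    As a concrete (if crude) sketch of point b), a reflected finding can be confirmed against the live instance by checking whether a probe payload comes back verbatim or HTML-encoded. Everything here is illustrative:

```python
import html

def reflected_unescaped(payload, response_body):
    """Crude exploitability check: did the probe payload come back verbatim?

    Finding the raw payload in a live response is a strong hint that the
    static-analysis finding really is exploitable; finding only the
    HTML-encoded form suggests an output filter is already in place.
    """
    if payload in response_body:
        return True   # reflected as-is: likely exploitable
    if html.escape(payload) in response_body:
        return False  # reflected, but encoded: probably mitigated
    return False      # not reflected at all

probe = '"><script>alert(42)</script>'
print(reflected_unescaped(probe, "<p>Hello %s</p>" % probe))               # True
print(reflected_unescaped(probe, "<p>Hello %s</p>" % html.escape(probe)))  # False
```

    A real check would of course fetch the page over HTTP and handle other encodings, but this is the core of pragmatically measuring whether your static-analysis traces hold up against the running application.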

    3) If you don’t write unit tests for your results, you are not doing a good job.  
    This scenario is applicable to both black-box and white-box.  
    The core idea here is that unit tests are something that the developers understand.
    A unit test is a repeatable mechanism that allows you to replicate what you have done (i.e. the process of identifying and/or exploiting the vulnerability). It can be a positive test or a negative test: you can have a unit test that tests for something that is there, or for something that isn't there (see AppScan 2011 for an example of what this could look like in practice).  
    From a security point of view, you should be writing unit tests that fail until the application is secure.  
    This is a great way to communicate with developers and gives management visibility into what is going on. It also:

    • allows managers to have measurable deliverables,
    • allows the developers to understand where you are coming from and to visualize what you are telling them,
    • allows QA to replicate the problem and confirm its resolution.

    Until you give a developer a unit test, they are unable to relate to what you are doing.
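    As a sketch of what such a test could look like (the rendering function and payload are hypothetical; a real test would drive the actual application):

```python
import html
import unittest

def render_comment(comment):
    # Hypothetical application code under test. A vulnerable build would
    # interpolate the comment as-is; this fixed build HTML-encodes it.
    return "<div class='comment'>%s</div>" % html.escape(comment)

class TestCommentEncoding(unittest.TestCase):
    """A security unit test: it keeps failing until the output is encoded."""

    PAYLOAD = "<script>alert('xss')</script>"

    def test_payload_is_not_reflected_verbatim(self):
        page = render_comment(self.PAYLOAD)
        self.assertNotIn(self.PAYLOAD, page)   # negative test: raw payload gone
        self.assertIn("&lt;script&gt;", page)  # positive test: encoded form present

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCommentEncoding)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("secure" if result.wasSuccessful() else "still vulnerable")
```

    Against a vulnerable build the negative assertion fails; once output encoding is added the test goes green, and QA now has a repeatable regression check for that finding.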

    4) If you don't understand the client's business model, you are not doing a good job.  
    This is very important!  
    In order to provide recommendations to the client (ones that make sense to them from a business point of view), you have to understand the target application and the way the client's business works.  
    If you don't understand the client's business model, what risks they care about and what their history in Web Application Security is, then you are 'talking in a bubble' and somebody on the client's side (who is probably less prepared than you) is going to have to figure out what your 'mumbo-jumbo' tech talk and presentation actually mean to their business.
    Note that, from a technical point of view, you (the security consultant) have a much better understanding of the security implications of the issues reported. If you are able to allocate enough time to understand the client's business model, you can cross-map both worlds and give the client a much more accurate representation of that application's risk profile (and of what should be done next)

    5) If you are not writing WAF rules or patching the app, you are also not doing a good job.
    The power of writing WAF (Web Application Firewall) rules is that you give the client a short-term solution while the problem is fixed (or, depending on the problem and the patch, a medium- to long-term solution). 
    This is very important because virtual patching allows customers to quickly mitigate or reduce the risk, gives them some breathing space, and gives them the ability to think strategically about what they want to do.  
    It even gives them the ability to not fix it, if that’s what they decide (i.e. they accept the risk).  
    In either case, you have done your job: you analyzed the application, found security issues, provided practical remediation measures, and helped them (the client) reduce their risk exposure.  
    Once the market evolves a bit more, I think that WAF rule writing and WAF rule verification will become another profitable service provided by Application Security consultancy companies (as a preview of how this market will also need to be played under an Open Source umbrella, check out what Breach is doing with the OWASP ModSecurity Core Rule Set Project). 
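    A virtual patch of this kind is typically only a few lines of WAF configuration. As a sketch (the parameter name, finding number and message are hypothetical, and the exact action syntax varies between ModSecurity versions):

```apache
# Hypothetical virtual patch: block requests where the 'comment' parameter
# carries a <script> tag, buying the developers time to fix the underlying
# output-encoding bug. Deny with a 403 and log the event for follow-up.
SecRule ARGS:comment "@rx (?i)<script" \
    "phase:2,deny,status:403,log,\
    msg:'Virtual patch for XSS in comment field (finding #42)'"
```

    Note that this is risk reduction, not a fix: the vulnerable code is still there, so the rule is a stop-gap until the application itself is patched (or until the client consciously accepts the risk).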

    6) If you are not giving the developers ‘auto code fixers’, then you are also not doing a good job.  
    A security consultant, especially one that understands programming, is in a much better position to evaluate the security implications of the multiple strategies & techniques that could be used when fixing (at the source code) a particular vulnerability.
    One of the areas where I want to spend resources in the future is actually writing 'auto-code-fixers'.  These 'code aids' would go into the developer's IDE and would be exposed like the current IDEs' code fixing/re-writing features (I wrote a very sweet PoC for Rational's Software Analyzer product which loaded up an 'O2 massaged' source-code file and gave the developer the option to fix one of the reported findings).  
    Of course, some people are not comfortable with providing developers with direct code snippets which could end up in production environments (and the developer & their boss will need to tick the box that says 'I accept responsibility for this'), but by exposing this information to the developers, there is a much better chance that all relevant parties will gain a much better understanding of the root causes of the issue reported and of the suggested (from a security point of view) solutions. 
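    To make the idea concrete, here is a toy 'auto code fixer' reduced to a single rewrite rule. The Response.Write / Server.HtmlEncode names follow the classic ASP.NET API, but the regex-based approach is deliberately simplistic; a real fixer would work on the parsed AST, the way IDE re-writing features do:

```python
import re

def suggest_fix(line_of_code):
    """Toy 'auto code fixer' for one insecurity pattern: wrap raw writes of
    request data in an HTML-encoding call. Returns the rewritten line and
    whether anything was changed. Purely illustrative - a real fixer must
    understand context before applying a rewrite like this.
    """
    pattern = re.compile(r"Response\.Write\((Request\[[^\]]+\])\)")
    fixed, count = pattern.subn(r"Response.Write(Server.HtmlEncode(\1))",
                                line_of_code)
    return fixed, count > 0

before = 'Response.Write(Request["name"]);'
after, changed = suggest_fix(before)
print(after)  # Response.Write(Server.HtmlEncode(Request["name"]));
```

    Even a sketch like this shows why the developer (and their boss) still needs to review and accept the change: the fixer encodes the security knowledge, but the responsibility for shipping the code stays with the team.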

    Why I had to build O2?

    I had to build O2 because the state-of-the-art tools (both commercial & open-source, and both white & black box) were not designed for knowledgeable web application security consultants (like me).

    There is a reason why the adoption rate of these tools is very LOW (among security professionals, developers, software architects, etc.), and, even more importantly, there is a reason why, even when they are used, very few people actually get decent (& actionable) results from them. Of course the sales & marketing departments paint a different story, but most of the current sales result in shelf-ware (and if you have doubts about this statement, I just have one word for you: Frameworks)

    In addition to:

    1. lack of support for Frameworks like Struts, Spring, Enterprise Library, ASP.NET MVC (heck, most don't even 'properly support' J2EE's or ASP.NET's request execution flow),
    2. the customizations made to those Frameworks, and
    3. custom or ‘client / vendor specific’ Frameworks
    ... the reason why those tools don't work in the real world is that they (currently) don't 'understand' how the target application works.

    For example, when they DO provide a finding, that finding will only cover a very small part of the entire code flow that creates that vulnerability (for example the URL with the exploit or the internal Source-Sink trace).

    This is why the market perceives these tools as NOT working, and why the security professionals (who should be their MOST active users and promoters) look down on them and ignore them.

    Remember that my objective on my security engagements is to ‘Automate Security Knowledge and Workflows’.

    This way, less experienced users are able to replicate my actions and fix, mitigate or accept (the risk of) the security issues in their applications.

    Application security will never scale if we require everybody to be a security expert!!

    Back to O2...

    Historically, O2 was built on top of the Ounce Labs product (now called AppScan Source Edition) when I was hired (in 2007) as an independent consultant and tasked with using their tool on 'service-driven' engagements and providing feedback to the product team.

    After getting my head around how the Ounce Engine worked, I fell in love with its data-flow analysis and wide coverage (since I was used to doing it by hand), but was very disappointed by its lack of support for Frameworks and for 'building custom analysis' on top of those findings (which, remember, only represent a small part of the 'real' traces & exploit flow).

    So having a programming background, I did what every security consultant does today.

    I wrote scripts ...

    And more scripts & command line tools...

    And more scripts & some GUIs ...

    These eventually became so complex and feature-rich that I decided I needed to build a host for those scripts, tools and GUIs.

    And that is when O2 was born :)

    In fact, originally this tool was called F1 (as in 'F1 racing car' vs 'the normal cars that run on the road'), and was renamed O2 (for Ounce Open) when the Ounce Labs guys made the decision to allow me to Open Source it (which happened in Nov 08 (last year) at the OWASP conference in NYC)

    In the beginning, O2's capabilities were almost 100% dependent on Ounce's engine (since originally O2 (i.e. F1) was designed to automate and extend its capabilities). So at this stage, one could not use O2 without a valid (i.e. paid-for) Ounce Engine.

    Eventually, as O2's capabilities matured (aided by the fact that I was doing other security engagements outside of Ounce where I was using & developing O2), the number of features that did NOT require Ounce's commercial license started to grow, eventually taking O2 to a level where enormous value can be obtained by ALL users and making O2 worthy of being an OWASP project (and of being called 'A Platform').

    Today (Nov 09), O2 has reached a maturity level where I (Dinis) can finally perform security engagements with a type of visibility and automation that I could only dream of a couple of years ago.

    There are a small number of people (me and the few brave O2 users) who get a LOT of value from O2; the challenge now is to make this scale, and to dramatically simplify O2's workflows so that it can be easily used by new users.

    OWASP Newsletter - Nov 09

    This OWASP Newsletter - Nov 09 is a great step forward for OWASP,

    After a couple of half-baked efforts at OWASP newsletters in the past, we finally seem to have got it right. 

    Lorna and Kate did a great job on this first issue of the new generation of OWASP newsletters (which I hope will follow the same level of professionalism and regular publication schedule that we achieved with the OWASP podcasts).

    Here is the email sent earlier today by Kate (to owasp-all, OWASP LinkedIn group and a number of other WebAppSec mailing lists):

    After several months in development we are excited to release the first of many OWASP newsletters! We hope you will find the content relevant, interesting, and motivating. Many thanks to Lorna Alamri from the Minnesota chapter for putting together this document.

    As always your feedback is appreciated and if you have articles for upcoming newsletters please forward the information to Lorna at or to me

    Thank you all for your support!

    Kate Hartmann
    OWASP Operations Director
    9175 Guilford Road
    Suite 300
    Columbia, MD 21046
    Skype: kate.hartmann1

    Public reactions to last week's posts

    Following last week's post Update #3 on O2 & IBM, I received quite a lot of feedback (both publicly and privately). Finally it seems that people are taking a good look at O2, and due to the public nature of these posts, I am reaching a far wider internal audience at IBM than would be possible if I kept these thoughts private.

    Request for help on: OWASP O2 Platform

    (Posted to the owasp-leaders list on 17th/Nov/09)

    Hi there, in case some of you missed it, just before my OWASP O2 Platform Presentation at the AppSec DC conference last week I posted 4 blog posts on O2, IBM, and what I think should happen next:

    As you can see, I have moved O2 to OWASP and am driving 100 miles an hour into making the OWASP O2 Platform THE standard 'lingua franca' between multiple Application Security tools (allowing a type of Human+Tool analysis, workflow and automation that most people in our industry think is impossible).

    As RSnake says in his comment, this is a great opportunity for IBM. The only way we will have a number of standards in our industry, and any decent tool interoperability, is if we do it openly and collaboratively, with OWASP and O2 strategically positioned to lead that effort.

    IBM's return on investment is the fact that O2 will make it easier for users to use their products (which leaves the users in a position where they can choose the best tool for the job without worrying about whether those tools (Open Source or proprietary) talk to each other).

    What I like about the Part I - IBM Application Security related tools & "AppScan 2011" post - ignore the IBM references (or replace them with Open Source or proprietary equivalents), which are there to show that I could implement most (if not all) of that workflow today using available products and a number of O2 scripts - is that it:

        a) shows the complexity of real-world engagements (and I would argue that even that example is a VERY simplified version of reality), and
        b) shows how far away we are, as an industry, from 'communicating' and engaging with our clients in a way that gets them the maximum return on their investment in our services (and improves their security risk profile)

    If you are not interested in O2, IBM or what I am doing, you should at least read the 2nd part of this post:
    Part IV - O2 needs to be Commercially Supported and John Steven's blog post on Vendors in an Open-Source Security Community

    The only way OWASP materials will be used by the people that matter (big companies, small companies, software developers, framework developers, governments, etc.) is if OWASP materials can be 'consumed' in a professional, efficient and productive way.

    And just like commercial vendors such as Red Hat & IBM made the Linux 'commercial ecosystem' work, to really succeed in its mission ("... make application security visible so that people and organizations can make informed decisions about application security risks...") OWASP needs to create a healthy ecosystem of commercially-driven companies (maybe even government- or grant-funded external organizations) that support and drive its most successful projects.

    Of course, we have to be very careful about how we do this, since we have to make sure that it is done in a way that is 100% compatible with our values. Ironically, the two efforts that are probably closest to this reality (an OWASP project commercially supported by a 3rd-party company) are two projects led by two OWASP Board Members: me with O2 and Jeff with ESAPI.

    I think both Jeff and I have the political capital inside OWASP to have some room for maneuver in creating, testing and fine-tuning the model.

    The good news is that, IF (and it is a big if) we get this right, there are a LOT of OWASP projects that should follow the same path.

    OWASP Project leaders: imagine if you could work for a company that commercially supported your OWASP Project (Tool or Document) and paid you and others to work exclusively on that project and release what was created under OWASP.

    Of course, if we (me or Jeff) screw this up and the OWASP community thinks we have lost our independence, then we can no longer be Board Members.

    Disclaimer: I'm using Jeff as another example of what I am trying to do with O2 since it is a very similar scenario. BUT, just for the record, as far as I know, Jeff's employer has NOT decided (so far) to commercially support ESAPI, and they might never go down that path (that said, I think they will, since at the rate ESAPI is maturing, it will just be a matter of time before somebody else (individual or company) gets the funding to do it).

    So here is my request to you (owasp-leaders): please help me convert the materials created by your project (tool or document) into O2's Open Schemas so that we can consume them from a central location (and, when applicable, 'consume' O2's Open Schemas so that your project can benefit from artifacts created by other OWASP projects). Of course there is a lot more to O2 than this first step, but achieving good interoperability between OWASP tools would be a great step forward.

    As I explained in my previous email (subject was "Fwd: [Owasp-o2-platform] [SC-L] Static Analysis Findings"), one of O2's powerful features is its ability to quickly consume and process results from external tools.

    I'm happy to help you, and I am sure you will be pleasantly surprised by how easy it is to write these parsers (for example, Matt Tesauro can vouch for how I wrote the O2 WebScarab Log parser in a short period while attending the OWASP Brazilian conference; the objective of that exercise was to show how O2 could create reports based on the special tags supported by the latest version of WebScarab (not the NG one))
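    To give a feel for how small such a parser can be, here is a sketch that turns a made-up 'severity,type,url' log format into a list of findings dictionaries (the field names only mimic the idea of O2's Open Schemas; the input format itself is invented for illustration):

```python
import csv
import io

def parse_findings(raw_log):
    """Parse a (hypothetical) 'severity,vuln_type,url' tool log into a
    common findings structure. Skips blank or malformed lines so that a
    partially-garbled log still yields usable findings.
    """
    findings = []
    for row in csv.reader(io.StringIO(raw_log)):
        if len(row) != 3:
            continue  # skip blank/malformed lines
        severity, vuln_type, url = (field.strip() for field in row)
        findings.append({"severity": severity, "type": vuln_type, "url": url})
    return findings

log = "High,SQL Injection,/login.jsp\nMedium,XSS,/search.jsp\n"
print(parse_findings(log))
```

    Once a tool's output is in a shared structure like this, every other consumer (reports, filters, trace joiners) gets it for free, which is the whole point of the Open Schemas request above.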

    A final comment that I would like to make about IBM.

    My feeling is that they (IBM) want to do the right thing and support O2 (remember that there is a good historical precedent with IBM's support for key Open Source projects like Eclipse (see for tons more examples)), BUT they (IBM) are not sure/convinced about O2's ability to generate a vibrant and productive community.

    So ironically, at the moment YOU (owasp-leader or O2 user) are more important for the short/medium-term future of O2 than I am :)

    Thanks for your help,

    Dinis Cruz


    Friday, 13 November 2009

    Update #3 on O2 & IBM - 13 Nov 09

    I just posted a number of Blog posts related to O2, Ounce Labs and IBM 

    See also:
    I am quite interested in your (the reader's) thoughts, so please comment here or email me directly.

    Dinis Cruz
    (@ the OWASP AppSec conference in DC)

    Part IV - O2 needs to be Commercially Supported

    The OWASP O2 Platform is now reaching a critical mass moment where it really needs to be officially supported by a commercial entity:
    • there are a number of corporate users who have used it, love it, but are very worried about its current support model (which is basically me and Ian Spiro)
    • there are a number of commercial and very profitable revenue streams that can only occur if there is an infrastructure & 'machine' behind O2
    • O2 has already reached a technology level & quality where it is adding spectacular value to security consultants; the problem is that the current presentation and support level are very basic and unprofessional
    • there is a lot of functionality in O2 which just needs to be documented so that new users can find it and know how to use it
    • there are a number of small bugs and issues that need to be solved

    Part III - Why I said NO to IBM ... for now

    Following the Ounce Labs purchase by IBM last summer (see Update on O2 & Ounce & IBM and Update #2 on O2 & IBM - 02 Sep 09), I have been trying to figure out where the best place is for me and the OWASP O2 Platform in IBM's world. 

    Part II - Why IBM will ‘solve the problem’

    As one can see from Part I of this post series, IBM is currently spending considerable resources and investment in the Application Security space. 

    The question is, will they 'solve the problem'? I.e. will IBM, with all this investment, create products (in the next 1, 2 or 5 years) that will REALLY allow complete, thorough and maybe even 'scientific' analysis of Web Applications (& all their dependencies)? 

    Part I - IBM Application Security related tools & "AppScan 2011"

    To start this series of O2 (i.e. the OWASP O2 Platform) related posts, I would like to provide an example (using existing IBM products) of what an ‘Application Security Assessment’ should look like. 

    Tuesday, 10 November 2009

    New O2 Code Drop (09-Oct-09): Struts support, XRules, O2 Config, Search Engine, etc...

    (email sent to the owasp-o2-platform (subscribe here))

    Welcome to the OWASP O2 Platform mailing list (this is the first post to this list :) )

    FYI, I just uploaded to the O2 website a new code drop of the latest updates:

    There are a LOT of new features (which I will try to document in follow-up posts), for example:
    • Almost complete Struts support: Import and visualization for web.xml, struts-config.xml, tiles-definition.xml, validation.xml (see the O2StrutsMapping visualizer and exporter)
    • New XRules engine. This is very BIG since, for the first time, it is possible to write complex rules in a fully dynamic way in O2. For example, it was the XRules module that allowed me to create a trace that reads the Struts configurations (i.e. the O2StrutsMapping object) and does all sorts of mappings between the Action Controllers, the JSP views and Ounce's Traces
    • New O2-Config GUI which allows you to set up internal config variables (like the Temp Folder). This also includes a sort of DI (Dependency Injection) which can be used to set up (on load) any static property exposed by O2 Modules
    • Major changes to the O2 Search Engine tool, which make it REALLY useful (I tend to use it all the time now). For example, you can just drop in an entire folder (with Gigs of data) and quickly find a file's location, or you can filter by type of code (.NET or Java), index it, and do a quick regex search on it
    • DotNet assembly patching using PostSharp. The current version already supports a complete workflow of marking an assembly (via Cecil) with specific attributes which are then used by a custom PostSharp script that will instrument (à la AOP) the dll and place it into the GAC. I have used this version to successfully apply a patch to a vulnerable ASP.NET application (by 'patching' the vulnerable function in the GAC-deployed dll). This version also supports a basic Function Enter/Leave logger, which will be expanded in the next version to be able to create Findings based on the execution flow (just like the current version of the O2 Debugger does (exposed via the O2 CSharpScripts module))
    • WebScarab: Added support to O2's Findings Viewer to import WebScarab log files (the original version of WebScarab , not the NG one)
    • O2 Findings module: Added ability to save & load the current O2Findings into a binary serialized format
    • O2 Join Traces module: Added GUI to join Ounce-generated traces based on interface implementations
    • A number of bug fixes and minor changes (like exposing the Ounce MySql IP address and Port in the Rules Manager)
    • Renamed a number of O2 Modules *.exe files (to make them easier to find)
    • .... I'm sure there is more but I can't remember... :)
    Here are the main links:
    Please try them, and let me know what you think.

    Dinis Cruz