Wednesday, 2 May 2012

Roadmap for Testing a WebService's Authorization Model

Now that Arvind is asking THE real questions (see What is the formula for the WebServices Authentication mappings?), it's time to define the roadmap.

Here is what I think needs to happen when testing and visualizing an application's Authorization Model in a WebServices-driven application like TeamMentor:

  • Create UnitTests that are able to invoke the WebServices methods with valid state: (i.e. be able to successfully invoke all methods)
    • This is not just a case of having good-enough data, which in a lot of cases is dynamic (i.e. Method B needs data retrieved via Method A)
    • ...one also needs to take into account the cases where the data will be destroyed or corrupted (remember that if you're not blowing up the database, you're not testing the whole app)
    • ...which means that the execution order is very important, since we will need to support a solid set-up, tear-down and restore workflow
  • Invoke those methods with different users and roles:
    • You will need at least two user accounts per role, so that you can test what happens between user A and user B (of the same role) and between user A and user C (with different roles)
    • The data created needs to be exported into a format that can be consumed by a visualization script
  • Analyse the source code and extract the real formula: (using static analysis technology)
    • This should be in a format that can be cross-checked with the Web Method's tests
    • There could also be a 'hard-coded' mapping file that defines which values are currently accepted by the application's business logic (think of this as a consumable version of the application's technical-spec/architecture)
    • Note that sometimes we can already find Authorization vulnerabilities just by reviewing these static mappings
  • Cross-Check the Static Mappings with the Dynamically collected data
    • The first step is to make sure that the current code assumptions actually hold in the real world (note that while the static analysis represents what the developers would like to happen, the dynamic analysis represents what actually happens!)
    • Find blind spots and create ways to codify them
  • Fuzz it!
    • Using something like FuzzDB, add abuse cases (i.e. payloads on valid method invocations) and check that the expected rules and mappings still apply (while also keeping an eye out for other weird behaviours and vulnerabilities)
  • When vulnerabilities are found, integrate them with current bug tracking system
    • In some cases, each type of vuln will need to be created (or updated) as an individual issue
    • In other cases, we will need to consolidate them so that we don't create too many bugs
    • We will also need to get a risk analysis from the application owner, since some vulnerabilities might be more dangerous than others
  • Apply a fix and (quickly) confirm it via the created scripts
    • Ensure the developer can invoke one (or all) of these tests from their IDE
    • The scripts created must be able to reflect code fixes
  • Integrate these tests into the build/release process
    • So that they run whenever needed (every day, on git push, on release, etc...)
    • Note that the tests will need to be executed on the multiple Development, QA and Production environments (with an easy way to diff the results)
  • Package the visualizations created and empower developers
    • Make sure the developers can access the visualization/mappings created
    • Hand over and train developers in how to access, use and maintain those scripts, so that it is the developers' (or QA's) responsibility to make sure they keep running and reflect code changes
  • Create visualizations of data collected over time
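
To make the first step above a bit more concrete, here is a minimal sketch of a UnitTest with the set-up/tear-down/restore workflow. It is in Python for brevity (the real app is .NET), and the FakeArticleService and its methods are hypothetical stand-ins for the real WebServices proxy:

```python
import unittest

# Hypothetical stand-in for the real WebServices proxy; the real methods and
# their data dependencies (Method B needs data retrieved via Method A) will differ.
class FakeArticleService:
    def __init__(self):
        self.articles = {}
        self.next_id = 1

    def create_article(self, title):       # 'Method A': creates the state...
        article_id = self.next_id
        self.next_id += 1
        self.articles[article_id] = title
        return article_id

    def get_article(self, article_id):     # ...that 'Method B' needs to consume
        return self.articles[article_id]

class ArticleServiceTests(unittest.TestCase):
    def setUp(self):
        # Set-up: start every test from a known, valid state
        self.service = FakeArticleService()
        self.article_id = self.service.create_article('seed article')

    def tearDown(self):
        # Tear-down/restore: undo any damage, even when a test 'blows up' the data
        self.service.articles.clear()

    def test_get_article_with_valid_state(self):
        # Valid invocation: data created in setUp feeds the method under test
        self.assertEqual(self.service.get_article(self.article_id), 'seed article')

# Run with: python -m unittest <this module>
```

The point is the shape, not the fake service: every test gets a fresh, valid state, so execution order stops mattering.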
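
The 'different users and roles' step could export its data like this. The users, methods and the invoke_as policy below are all made up, simulating what the live invocations would return:

```python
import json

# Two accounts per role (alice/bob are both editors) plus one reader, so we can
# compare user A vs user B (same role) and user A vs user C (different roles)
users = [('alice', 'editor'), ('bob', 'editor'), ('carol', 'reader')]

def invoke_as(user, role, method):
    # Stand-in for a live WebServices call: a hard-coded policy simulates
    # the server's allow/deny responses so the export format can be shown
    allowed = {'GetArticle': {'editor', 'reader'},
               'DeleteArticle': {'editor'}}
    return role in allowed[method]

results = [{'user': u, 'role': r, 'method': m, 'allowed': invoke_as(u, r, m)}
           for (u, r) in users
           for m in ('GetArticle', 'DeleteArticle')]

# Export in a format a visualization script can consume
print(json.dumps(results, indent=2))
```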
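
The cross-check step is essentially a diff between intent (static mappings) and behaviour (dynamic results). A sketch with made-up data:

```python
# What the developers *intended* (from static analysis): method -> allowed roles
static_mapping = {'DeleteArticle': {'admin', 'editor'}}

# What *actually happened* (from the dynamic tests): did each invocation succeed?
dynamic_results = {('DeleteArticle', 'admin'):  True,
                   ('DeleteArticle', 'editor'): True,
                   ('DeleteArticle', 'reader'): True}   # intent says no!

# Blind spots: invocations that worked in the real world but shouldn't have
blind_spots = [(method, role)
               for (method, role), allowed in dynamic_results.items()
               if allowed and role not in static_mapping[method]]
print(blind_spots)
```

Here the 'reader' role slipping through DeleteArticle is exactly the kind of Authorization vulnerability this cross-check exists to surface.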
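
And the fuzzing step, reduced to its core loop. The payloads are in the style of FuzzDB, and the get_article endpoint is a hypothetical stand-in for a real Web Method:

```python
# FuzzDB-style abuse-case payloads, replayed over a known-valid invocation
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]

def get_article(article_id):
    # Hypothetical endpoint: only numeric ids are considered valid input
    if not str(article_id).isdigit():
        raise ValueError('rejected')
    return 'article %s' % article_id

findings = []
for payload in PAYLOADS:
    try:
        get_article(payload)                    # do the expected rules still apply?
        findings.append((payload, 'ACCEPTED'))  # weird behaviour: record it
    except ValueError:
        pass                                    # payload correctly rejected

print(findings)
```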

Now you might think that this is it and that we can stop here. The reality is that this workflow is still very inefficient, slow and hard to use by all relevant parties.

The way we should be creating these Authorization models is by:
  • Creating a DSL (Domain Specific Language) that represents the current Application Business Logic and its Authorization rules
  • Getting the Application's business owner(s) to write the rules in the DSL language created (which they should understand and be comfortable with)
  • Feed those rules to the Static and Dynamic analysis tools/scripts
  • Analyse the data created and integrate the results into the SDL
Now THAT'S how Authorization models should be created, visualized, tested and enforced :)
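
To give a feel for what such a DSL could look like, here is a toy internal DSL sketched in Python (the book below uses Boo for this). The roles, methods and rules are purely illustrative; the goal is rules a business owner can read:

```python
# Toy internal DSL: each Rule reads close to plain English
class Rule:
    def __init__(self, role):
        self.role, self.methods = role, set()
    def can_call(self, *methods):
        self.methods.update(methods)
        return self            # allows chained, sentence-like rule definitions

RULES = [
    Rule('reader').can_call('GetArticle'),
    Rule('editor').can_call('GetArticle', 'SaveArticle'),
    Rule('admin') .can_call('GetArticle', 'SaveArticle', 'DeleteLibrary'),
]

def is_allowed(role, method):
    # This is the query the static and dynamic analysis scripts would consume
    return any(r.role == role and method in r.methods for r in RULES)

print(is_allowed('reader', 'DeleteLibrary'))   # False
```

The same RULES list then drives both sides of the roadmap: the static checks compare it against the code, and the dynamic tests compare it against live behaviour.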

Finally, for a great introduction to DSLs and how to use them in C#, please take a look at this book: DSLs in Boo: Domain-Specific Languages in .NET (especially the chapter where the author talks about a Security-focused DSL)
