mailing list archives
RE: [Logical vs. Technical] was Curphey award 2004 to SPI Dynamics
From: "Arian J. Evans" <arian () anachronic com>
Date: Tue, 29 Jun 2004 21:36:55 -0500
limits of the software security process as well. As you said, if a
human developer has a difficult time identifying business-logic issues
in their code (because the logic is complex), how can an automated tool be
expected to find them?
I think one of the primary arguments *for* tools here is _time_.
Lack of time is how bugs get missed and QA cycles get skipped,
and it's how Super Secure Programmer makes mistakes. I've made
mistakes at 2am finishing the emergency project; I'm sure we all have.
So it's about the quality of the data from the tools: does the
automation save time or add time? That's where I sometimes
get frustrated: when the tool vendors combine low-accuracy
checks with high risk ratings. While I can (usually) contextually sift
through the data quickly, it scares the hell out of many clients who
run the tool on their own; they have no idea how to sort through
the data and prioritize it into meaningful information.
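Prioritizing that data into meaningful information can be sketched, very loosely, as weighting each finding's reported severity by how accurate the check that produced it tends to be. This is a toy illustration only; the check names, ratings, and accuracy figures are made up, not any vendor's actual scoring:

```python
# Toy prioritization sketch: a low-accuracy check with a scary
# severity rating should sink below an accurate, verified finding.
findings = [
    {"check": "blind SQL injection (heuristic)", "severity": 9, "accuracy": 0.2},
    {"check": "reflected XSS (verified echo)",   "severity": 7, "accuracy": 0.9},
    {"check": "verbose error message",           "severity": 3, "accuracy": 0.95},
]

# Confidence-weighted priority: severity scaled by check accuracy.
for f in findings:
    f["priority"] = f["severity"] * f["accuracy"]

# Report highest-priority findings first.
for f in sorted(findings, key=lambda f: f["priority"], reverse=True):
    print(f"{f['priority']:4.1f}  {f['check']}")
```

With these numbers, the verified XSS (7 x 0.9 = 6.3) outranks the heuristic SQL injection (9 x 0.2 = 1.8) despite the latter's higher raw severity.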
Also note that even two completely secure blocks of
code can be combined to create an insecure scenario.
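As a hedged illustration of that point (the functions and payload here are hypothetical, not from the original thread): each of these two routines is safe on its own, but composing them in the wrong order reintroduces the markup the escaping had neutralized.

```python
import html
from urllib.parse import unquote

def escape_for_html(s: str) -> str:
    # Safe on its own: neutralizes <, >, &, and quotes.
    return html.escape(s)

def normalize(s: str) -> str:
    # Safe on its own: canonicalizes %-encoded input.
    return unquote(s)

user_input = "%3Cscript%3Ealert(1)%3C/script%3E"

# Secure composition: decode first, then escape for output.
safe = escape_for_html(normalize(user_input))

# Insecure composition: escaping first does nothing to the %-encoded
# payload, and decoding afterward emits live markup.
unsafe = normalize(escape_for_html(user_input))

print(safe)    # escaped, inert
print(unsafe)  # raw <script> tag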
I've given presentations about this where I categorize webappsec
vulnerabilities into two groups, Technical and Logical.
That's a nice distinction. I was going to give an example of where
tools consistently fail, but you really summed it up. I'm thinking
of three fuzzable parameters with roughly 20^11 values each, and a
magic combination of the three that unlocks the door. A scanner will
never find the combination, but a human eyeballing app behavior can
play a shell game with the different valid values observed, and find
the magic recombinant set that initiates a valid action that was
never intended to be allowed.
Completely Logical, and most easily identified through architectural
analysis, or behavioral observation/functional testing of the app.
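For a rough sense of why brute force fails here, the arithmetic on that hypothetical search space (parameter counts taken from the example above; the scanner throughput is an assumed figure):

```python
# Hypothetical numbers: three parameters, each with ~20**11 possible
# values, and exactly one "magic" combination of the three.
per_param = 20 ** 11
combinations = per_param ** 3          # 20**33 total combinations

requests_per_second = 10_000           # assumed, generous scanner rate
seconds_per_year = 60 * 60 * 24 * 365
years_to_exhaust = combinations / (requests_per_second * seconds_per_year)

print(f"{combinations:.3e} combinations")
print(f"~{years_to_exhaust:.1e} years to exhaust by brute force")
```

Even at an implausibly fast request rate, exhausting the space takes on the order of 10^31 years, which is why the human's targeted recombination of observed-valid values wins.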
The explanation I give to people who are frustrated with tools is
this: I think the network vuln identification space has matured because
the variables in OS configuration/patch state are limited. There
are two big variables in custom applications that scanners will
likely always have trouble accounting for:
1. Developer Coding Style
2. Emergent Behaviors of apps/components bolted together (logical)
This is where many have said, "scanners suck because they don't find
everything." Though I think it's simply better to say the technology is
not mature yet. On the other hand, scanners don't get bored or tired
like humans do; I'm sure everyone on this list tests things 100%
consistently all of the time. :)
(And normally I'm the one cautioning people that automated tools can't
solve all their problems, so now I'm arguing that automation can help.)
(Disclaimer: My private opinions do not reflect the thoughts or
position of my employer on these subjects. etc. etc.)