Dailydave mailing list archives

Re: It jerked and it berked but the thing really worked!


From: David Molnar <dmolnar () gmail com>
Date: Tue, 24 Feb 2009 16:07:18 -0800

On Mon, Feb 23, 2009 at 2:45 PM, Dave Aitel <dave () immunityinc com> wrote:

Lots of new security technologies look like this:

[some automated process] ---->  [some small team of really skilled
people] ----> results. This is great stuff, but it's not "scalable" in
the sense that the sales team will imply.


To hijack the thread a bit, this feels like a key area for research. To
start with, we have techniques like fuzz testing that get us a lot of the
way there by providing actual test cases to developers, but right now it is
hard to figure out automatically whether the test cases produced are:
0) important (i.e. exhibit an exploitable bug), and
1) not duplicates of previously found bugs.

For 0), I like Pusscat's Byakugan work; there's also this work by Lin et al.:
http://www.cs.purdue.edu/homes/zlin/file/DSN08.pdf
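
To make the kind of automated check I mean concrete, here's a toy sketch in
Python. The field names are invented and this is nothing like Byakugan's
actual logic; it's just the flavor of first-pass rule a triage tool can apply
before a human ever looks at the crash:

NEAR_NULL = 0x10000  # reads/writes below this look like NULL dereferences

def rate_crash(access_type, fault_address, pc_controlled):
    """Return a rough severity label for one crash.

    access_type   -- 'read', 'write', or 'exec'
    fault_address -- the address the faulting instruction touched
    pc_controlled -- True if EIP/RIP appears to come from fuzzed input
    """
    if pc_controlled or access_type == 'exec':
        return 'probably exploitable'      # attacker steers control flow
    if access_type == 'write' and fault_address >= NEAR_NULL:
        return 'possibly exploitable'      # wild write, worth a human look
    if fault_address < NEAR_NULL:
        return 'probably not exploitable'  # looks like a NULL dereference
    return 'unknown'                       # punt to the skilled humans

if __name__ == '__main__':
    print(rate_crash('write', 0x41414141, False))  # possibly exploitable
    print(rate_crash('read', 0x00000004, False))   # probably not exploitable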

Of course we won't ever be able to match a true expert, but we can provide
some value for triage.

For 1) I can think of a few heuristics (and have implemented two) based on
"fuzzy" stack hashes, but I don't have much evidence that any of them works
well at separating out "distinct" bugs without generating too many duplicate
reports for the same bug.
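
For concreteness, here's roughly the shape of what I mean by a fuzzy stack
hash (a simplified sketch, not either of the two I actually implemented; the
frame format and the depth of five are arbitrary): keep the top few frames of
the backtrace, strip the offsets, and hash what's left, so two crashes that
fault at slightly different offsets in the same call chain land in one bucket.

import hashlib

TOP_FRAMES = 5  # how many frames to keep; the obvious knob to tune

def normalize(frame):
    """Keep 'module!function' and drop the '+0x1a2' style offset."""
    return frame.split('+')[0].strip().lower()

def fuzzy_stack_hash(backtrace, depth=TOP_FRAMES):
    """backtrace is a list of frame strings, innermost frame first."""
    frames = [normalize(f) for f in backtrace[:depth]]
    return hashlib.md5('|'.join(frames).encode()).hexdigest()

if __name__ == '__main__':
    a = ['libfoo!parse_hdr+0x1a', 'libfoo!parse+0x220', 'app!main+0x40']
    b = ['libfoo!parse_hdr+0x3c', 'libfoo!parse+0x1f8', 'app!main+0x40']
    # Same call chain, different offsets: same bucket, so b is a duplicate.
    print(fuzzy_stack_hash(a) == fuzzy_stack_hash(b))  # True

Keep too few frames and distinct bugs collide; keep too many (or skip the
normalization) and the same bug splinters into a pile of "new" reports, which
is exactly the trade-off I don't have good data on yet.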

What are other "human bottlenecks" for everyone's favorite analysis
techniques?
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave
