Dailydave mailing list archives

Re: Does Fuzzing really work?
From: "Peter Winter-Smith" <peter () ngssoftware com>
Date: Mon, 25 Sep 2006 21:21:26 +0100

Hi Aviram,

I can't speak for Dave, but I felt that his note "There are no new MSRPC 
bugs. You should give up looking for them" 
(http://seclists.org/dailydave/2006/q3/0160.html) was probably given more in 
the context of the MSRPC fuzzer that he had published (and/or provided with 
CANVAS/SILICA), and knowing what I do of Dave I suspect was more of a 
joke/challenge than a definitive statement ;-)

The research looks very interesting. However, in those figures that you gave, 
to what degree do you account for subsets of the data that you are testing 
(fields within a given portion of a given protocol, and the format of the 
data that they can accept), and for the valid, commonly interesting bad 
values which can typically be used in such circumstances (i.e. data which 
conforms but has often been known to cause problems - strings of specific 
lengths, given sets of integer values which often cause problems, etc.)?
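To make the question concrete, here is a minimal sketch of the sort of "interesting bad values" being described: data that conforms to a field's format but sits on boundaries known to trigger bugs. The specific lengths and integers below are illustrative examples, not the actual value sets used by beSTORM or any particular fuzzer.

```python
# Illustrative boundary values of the kind smart fuzzers inject.
# These specific choices are assumptions for the example, not any
# product's real value lists.

def interesting_strings():
    """Strings of lengths that commonly straddle buffer boundaries."""
    return ["A" * n for n in (15, 16, 127, 128, 255, 256, 1023, 1024, 4095, 4096)]

def interesting_integers():
    """Signed/unsigned boundary values that often cause problems."""
    return [0, 1, -1, 127, 128, 255, 256,
            2**15 - 1, 2**15, 2**16 - 1, 2**16,
            2**31 - 1, 2**31, 2**32 - 1]

if __name__ == "__main__":
    print(len(interesting_strings()), "string cases,",
          len(interesting_integers()), "integer cases")
```

Multiplying a handful of such values across every field of every scenario is what drives the combination counts discussed below.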

Just interested what makes up the numbers! :-)


----- Original Message ----- 
From: "Aviram Jenik" <aviram () beyondsecurity com>
To: <dailydave () lists immunitysec com>
Sent: Monday, September 25, 2006 8:35 PM
Subject: [Dailydave] Does Fuzzing really work?

There's a lot of talk lately on whether fuzzing can actually be used to find
vulnerabilities - and, more importantly, reliably rule out the existence of
unknown vulnerabilities.

Since most of this talk revolves around Dave's note "There are no new
MSRPC bugs. You should give up looking for them" I thought this was the 
forum to answer this question.
The question was whether RPC fuzzing can really rule out vulnerabilities; 
our experience shows it can (at least, as much as you can rule out anything 
in IT security).

Let me throw some numbers at you(*). The FTP protocol has 310 "scenarios" - 
valid FTP sessions. If you try to overflow a different part of a command in 
every scenario, you get a little over 12M attack combinations. If you use 
some of our nifty beSTORM 2.0 optimizations, you get to 70,679 attack 
vectors. Even with the lamest FTP server, allowing just 5 simultaneous
connections and taking a full second to process each session, it would take
only 4 hours to fully test the protocol.
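The "4 hours" figure follows from a quick back-of-envelope calculation, sketched here under the assumptions stated in the post (one second per session, 5 concurrent connections):

```python
# Back-of-envelope check of the FTP figures above.
vectors = 70_679          # optimized attack vectors, per the post
concurrent = 5            # simultaneous connections
seconds_per_session = 1.0 # processing time per session

hours = vectors * seconds_per_session / concurrent / 3600
print(f"{hours:.1f} hours")  # ≈ 3.9, i.e. "only 4 hours"
```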

FTP is too simple, you say? With more complex protocols like SIP you have
15,000 scenarios and something like 40,680,459 attack vectors after
optimizations. Sounds scary at first, but a SIP server capable of handling
500 requests per second would take only 22 hours to test, which means you 
can leave it running when you go home for the weekend and come back for the
results. If you don't feel like waiting 22 hours, put it on 5 machines and
have an answer in 4 hours. If you don't feel like waiting 4 hours... well, 
you get the point.
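The same arithmetic can be checked for the SIP case, again using only the numbers given in the post:

```python
# Back-of-envelope check of the SIP figures above.
vectors = 40_680_459  # optimized attack vectors, per the post
rate = 500            # requests per second the server can handle

hours = vectors / rate / 3600
print(f"single machine: {hours:.1f} hours")     # ≈ 22.6 hours
print(f"five machines:  {hours / 5:.1f} hours") # ≈ 4.5 hours
```

Splitting the work across machines scales linearly only if the vectors are independent, which holds for stateless per-session fuzzing of this kind.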

HTTP is probably as complex as they come, but most servers can handle many 
requests per second in a closed environment on a fast local network.
Suddenly trying all HTTP combinations is not as hard as it seems.

And so on, and so on.

My point, to those people who mock fuzzers: you either tried the wrong
kind, or you tried them a long time ago. I'm not saying that buffer 
overflows are suddenly obsolete (don't delete that ZERT bookmark just 
yet!). But nowadays there is no reason for an FTP server to come out with 
buffer overflows; there's just no excuse.

(*) Don't believe the numbers? Check the URL below and see for yourself.

Aviram Jenik
Beyond Security
(703) 286-7725 x504


Looking for Unknown Vulnerabilities?
Dailydave mailing list
Dailydave () lists immunitysec com

