Intrusion Detection Systems mailing list archives

Re: Assessment tools/Scanners


From: dugsong () monkey org (Dug Song)
Date: Sun, 10 Oct 1999 16:13:01 -0400 (EDT)



On Sun, 10 Oct 1999, Vin McLellan wrote:

> > this is evident from the ways we've found to trivially elude them.
>
> Would you mind elaborating on this point further, Dug?

Ptacek, Newsham, and Paxson have already done a good job of defining a
functional IDS bug taxonomy: insertion, evasion (subterfuge), and denial
of service (state-holding attacks, etc.). at this point, i'd argue that
we need published exploits against IDSs themselves before we'll see any
real improvement (e.g. how many IDSs were fixed after the SNI IDS paper?
after the publication of fragrouter on BUGTRAQ?)...
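to make the insertion/evasion idea concrete, here's a toy sketch (my
illustration, not from the SNI paper - the segment layout and signature
are invented). an attacker sends overlapping TCP segments; if the
monitor resolves the overlap differently than the end host, the two see
different streams, and a signature can slip past the IDS:

```python
# Toy model of Ptacek & Newsham-style evasion via overlapping segments.
# If the IDS keeps the FIRST copy of an overlapped byte while the end
# host keeps the LAST, their reconstructed streams diverge.

def reassemble(segments, favor_new):
    """Rebuild a byte stream from (offset, data) segments.

    favor_new=True  -> later (overlapping) data overwrites earlier bytes
    favor_new=False -> the first copy of each byte wins
    """
    buf = {}
    for offset, data in segments:
        for i, ch in enumerate(data):
            pos = offset + i
            if favor_new or pos not in buf:
                buf[pos] = ch
    return "".join(buf[k] for k in sorted(buf))

# Bytes 4-6 are sent twice with different contents (hypothetical layout).
segments = [
    (0, "GET "),
    (4, "/xx"),        # decoy bytes a naive monitor may accept
    (4, "/cg"),        # overlap: the payload the end host actually keeps
    (7, "i-bin/phf"),
]

host_view = reassemble(segments, favor_new=True)   # host favors newer data
ids_view  = reassemble(segments, favor_new=False)  # IDS keeps the first copy

print(host_view)  # GET /cgi-bin/phf  -- the attack reaches the server
print(ids_view)   # GET /xxi-bin/phf  -- the signature match fails
```

fragrouter automates many such ambiguities at the IP and TCP layers;
the point is that any difference between the monitor's reassembly
policy and the target's is exploitable.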

i'll leave it to other people on this list to offer specific examples of
IDS deficiencies - i know several people have encountered them in the
course of their own in-house evaluations, but have held off on publishing
them for other reasons (extremely restrictive eval licenses, mostly).

> With a rule-based system, how does one go beyond the list of known
> attacks to alarm on vulnerabilities as well as known threats?

this is one of the failings of misuse detection, and why anomaly detection
is so important.
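the contrast is easy to sketch (my illustration - the signature,
request data, and threshold are all made up). a misuse detector can
only match patterns it already knows; an anomaly detector baselines
"normal" behavior and flags deviations, so it has at least a chance
against an attack with no published signature:

```python
# Misuse vs. anomaly detection, in miniature.
import statistics

SIGNATURES = ["/cgi-bin/phf"]          # known-bad patterns only

def misuse_alert(request):
    """Alert only if the request matches a known signature."""
    return any(sig in request for sig in SIGNATURES)

def anomaly_alert(value, baseline, k=3.0):
    """Alert if value is more than k standard deviations from baseline."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return abs(value - mean) > k * sd

normal_lengths = [60, 72, 65, 80, 70, 68, 75, 62]   # typical request sizes

print(misuse_alert("GET /cgi-bin/newhole"))   # False: no signature yet
print(anomaly_alert(4000, normal_lengths))    # True: wildly abnormal length
```

the flip side, of course, is that the anomaly detector says nothing
about *what* happened - only that something deviated from its baseline.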

> NFR is getting backend filters from Mudge and the lunar luminaries of
> the L0pht; ISS has its Xtrodinary X-Force... Have these groups or
> similar gray-cloaked warriors contributed to the state of the art or
> pushed the envelope?

sure, as far as vulnerability definitions go. but the bleeding-edge work
in IDS is being done in academic research - the commercial approach has
been to ape virus scanners.

> Why is it so difficult to develop evaluation criteria which can
> rate IDS packages in terms of which can (a) effectively generalize from
> known exploits in order to place alarms on similar but not identical
> attacks, and (b) alarm on areas of potential vulnerability, even if no
> exploit has yet been published?

because no commonly accepted vulnerability taxonomy exists. initiatives
such as MITRE's CVE, or the Bugtraq BID, are useful in terms of providing
a common alert namespace for vendor interoperability, but relatively
useless when trying to scientifically categorize vulnerabilities for
intrusion detection.

> Do you know if anyone is doing this now to track the _future_
> success or failure of these IDS packages in identifying novel attacks,
> without generating a flood of false alarms?

the DARPA IDEVAL is attempting to do this by introducing novel attacks
in their test data, and rating IDSs by their ROC (receiver operating
characteristic) curves.
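for anyone unfamiliar with the scoring, here's a rough sketch (my
illustration - the scores and labels are invented, not IDEVAL data).
each event gets a suspicion score from the detector; sweeping the alert
threshold trades detection rate against false-alarm rate, and the
resulting curve is what gets compared across systems:

```python
# Computing ROC points from scored events, as in IDEVAL-style rating.

def roc_points(scores, labels):
    """Return (false_positive_rate, true_positive_rate) per threshold.

    labels: 1 for a real attack, 0 for benign traffic.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thresh in sorted(set(scores), reverse=True):
        tp = sum(1 for s, l in zip(scores, labels) if s >= thresh and l)
        fp = sum(1 for s, l in zip(scores, labels) if s >= thresh and not l)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]   # detector output
labels = [1,   1,   0,   1,   0,   1,   0,   0]     # ground truth

for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

a detector that generalizes well should climb toward high TPR while the
FPR stays low; a detector drowning its operators in false alarms shows
up immediately on the same plot.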

-d.

http://www.monkey.org/~dugsong/


