Intrusion Detection Systems mailing list archives

Re: Assessment tools/Scanners


From: gshipley () neohapsis com (Greg Shipley)
Date: Mon, 11 Oct 1999 05:45:58 -0500 (CDT)




On Sun, 10 Oct 1999, Vin McLellan wrote:

        How does one develop a test suite that will identify the failure
when an IDS module does not identify the threat behind a novel attack?  (Or
an old attack presented in a novel manner?)
*snip*

Wow - there are about a billion GOOD questions in here - enough to spawn
off at least a dozen sub-threads.  I'll try to throw some opinions out
here, maybe some of them will help....maybe not....dunno....

        With a rule-based system, how does one go beyond the list of known
attacks to alarm vulnerabilities as well as known threats?

Just to clarify terminology - when you say "rules based" you are referring
to the traditional "attack signature" model, and not something like ODS'
CMDS, correct?  That being the case, you can "judge" on a number of
things, but they are HIGHLY dependent on the target audience.  For
example, some of the things I've been looking at are:

- depth of signature-base (how much can it look for and spot?)
- architecture (does it scale?  How does management work?)
- level of polish (I have yet to come up with a good way of saying this - 
basically, does it have a useful interface, does it supply good
documentation, etc.) 
- capabilities (how customizable is it?  Can it do packet re-assembly?)

Now, those are things I think are important, but that might not hold true
for everyone.  Take something like Dragon - very easy to customize, and
still very raw....but it works.  Some people might not care that it
doesn't have an attack-tree-like display, or that the documentation for
the attacks is in a separate file, etc.  Others may require that the IDS
snap into HP OpenView, while others may want to puke at the thought of
OpenView....It all goes back to the question of what you need the IDS to
do.  But you are right, there are some common areas you can cover....

 
        (I have always presumed that the need to identify areas of potential
vulnerability -- as opposed to known exploits -- was a major reason the
leading IDS vendors have hired or contracted with various gray-sombrero
hacker groups.  NFR is getting backend filters from Mudge and the lunar
luminaries of the L0pht;  ISS has its Xtrodinary X-Force;  and Axent has its
sharpshooter SWAT group.  Does this work? Have these groups or similar
gray-cloaked warriors contributed to the state of the art or pushed the
envelope?)

Good questions.....I wish I knew the answers.  :)  I would argue yes, and
no, and it's rather hard to judge.  Above all else though I would stress
the distinction between these organizations - grouping them together can
be dangerous.  First, consider the policies of these groups - L0pht, for
example, IMHO is pretty much no-holds-barred: they get the message out
there, no matter what.  As for their NFR sigs, when those do come out
they will be doing the community as a whole a service.  But if you look at
how the X-Force, Russ Cooper, etc., operate, it's a double-edged sword:
"partial" disclosure can have a bad side.

I was discussing this very issue with Simple Nomad at SANS just a few days
ago - he brought up some really good points.  Say, for example, I find a
new hole in NT/IIS.  Now, depending on my background, I may give the
information to my friends, I may report it to BUGTRAQ, I may contact MS,
or I may sit on it and use it for my own deviant practices.  Now, depending
on your "orientation" - if you catch wind of such a hole, you may sit on
it and wait for the vendor to release a patch, or you may document it and
code it into your own security software (thereby giving you an edge over
the competition), or you may release it to the public, or....

The stakes go up if you are a sysadmin and you find the exploit code "in
the wild" - because then you KNOW it is being used.  So if you are into
"partial disclosure" - as some organizations are - you could arguably be
doing the security community as a whole a disservice by NOT sharing the
information.

In short, I don't think anyone will argue against the need to organize,
sort, and generally get your hands around vulnerability data.  That is
needed, and anyone doing ID stuff needs access to this type of
information.  But I think it's important to understand the motivation
behind who does what, and who releases what.

But this is perhaps beyond the scope of this discussion and this list, so
I will end this here....

        Why is it so difficult to develop evaluation criteria which can
rate IDS packages in terms of which can (a) effectively generalize from
known exploits in order to place alarms on similar but not identical
attacks, and (b)  alarm areas of potential vulnerability, even if no exploit
has yet been published?

Welp, like I think Dug Song touched on, you would need to agree on, at a
minimum:

1) a standardized and universally accepted list or DB of known
vulnerabilities.
2) a set of tools to test/exploit those vulnerabilities

Now, owners of sites like SecurityFocus, packetstorm, technotronic, etc.,
are GREAT for (2) because they WILL supply exploit code - but even their
archives are fairly limited.  I don't believe SANS, Mitre, etc., are
interested in releasing/distributing exploit code....but I could be wrong.  
Basically, a LOT has to happen before we can get to (1) or (2), and IMHO,
we need both to perform this type of testing...but this is just a small
piece of the big picture.
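Just to make the idea concrete, here's a rough sketch (in Python, with placeholder hooks rather than any real product's API) of what testing against (1) and (2) might look like: replay every known exploit past the sensor and see which ones fail to produce an alert.

```python
# Hypothetical harness for points (1) and (2) above: a standardized list of
# known vulnerabilities, an exploit for each, and a check that the IDS
# alerted.  run_exploit and read_alerts are placeholder hooks - they stand
# in for whatever launches the attack and reads the sensor's alert log.

def validate_signatures(exploits, run_exploit, read_alerts):
    """Replay each named exploit; return (detection_rate, misses)."""
    missed = []
    for name, attack in exploits.items():
        before = len(read_alerts())        # alerts seen so far
        run_exploit(attack)                # fire the attack at the target
        if len(read_alerts()) == before:   # no new alert -> a miss
            missed.append(name)
    rate = 1.0 - len(missed) / len(exploits)
    return rate, missed
```

The hard part, of course, is populating `exploits` - which is exactly why (1) and (2) have to exist first.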

Now, if you look at the CVE project (http://cve.mitre.org) I believe they
have identified, what, 663 known "Vulnerabilities and Exposures" and
standardized naming on some 300 right now.  But such efforts are open to
interpretation.  For example, how do you classify Microsoft's
ODBC/RDS/MDAC problems?  They are, IMHO, MAJOR issues because they do
allow for an administrator/system_local level remote compromise, yet none
of the current ID products will detect the MDAC/RDS attack.  And what
about the boatloads of CGI holes?  rfp has claimed in the past that he's
hunted down 100-some bad CGI programs....I *know* all of those aren't in
CVE, the X-Force DB, etc.....

So we don't have (1) or (2) yet, which IMHO, are badly needed.

        Has anyone tried to evaluate these products historically, to see what
percentage of new and novel attacks have been caught because one or several
IDS packages reached beyond the list of known exploits and detected
anomalies? (Too early for this, maybe?)

Not sure what you are asking here - almost all of the commercial ID
products are looking for knowns, and knowns only.  They are
signature-based.....

The advantage of the NFRs and Dragons of the world is that you can, fairly
easily, code up your own "sig" for an attack.  I know ISS RealSecure v3.2
has some of this flexibility, but it's not as flexible as, say, NFR's N-Code.
Are there any products that do network-level statistical profiling, to
look for "unknown" attacks?  Not that I know of, but that's just me....
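For illustration only, here is a toy, grep-level version of what a signature check does.  Real engines (NFR's N-Code, etc.) are stateful and protocol-aware; the two patterns below are just examples I've picked, not anyone's actual rule set.

```python
# Toy signature engine: scan a reassembled payload for known attack
# patterns.  These two patterns (the phf CGI probe and the MDAC/RDS
# request) are illustrative examples, not a real product's signatures.

SIGNATURES = {
    "CGI phf probe":   b"/cgi-bin/phf",
    "MDAC RDS attack": b"/msadc/msadcs.dll",
}

def match_signatures(payload: bytes):
    """Return the name of every signature whose pattern appears."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in payload]
```

The appeal of an open signature language is exactly this: when a new hole drops, you add one entry yourself instead of waiting on the vendor.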

        Do you know if anyone is doing this now to track the _future_
success or failure of these IDS packages in identifying novel attacks,
without generating a flood of false alarms?

I'm working on some of this data now, but I'll put this question to the
list: what would you guys LIKE to see?  IMHO, the only way to thoroughly
validate a vendor's set of signatures is to run each and every attack past
them.  And to do so, you either have to possess or write exploit code for
every check.  And even then, make no mistake, you CAN mutate attacks to
the point that network-based ID will fail.

And hell, as Dug pointed out, if you pipe stuff through fragrouter you'll
get past almost everything but NFR and Dragon.
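A rough sketch of why that works: an engine that inspects each packet or fragment in isolation, without reassembling the stream, will miss any pattern that straddles a fragment boundary.  (The pattern and the split point below are made up for illustration.)

```python
# Why fragrouter-style evasion beats naive matching: split the attack
# string across two fragments and a per-packet matcher never sees it
# whole.  The pattern here is just an example.

PATTERN = b"/cgi-bin/phf"

def per_packet_match(fragments):
    """Naive IDS: inspect each fragment in isolation."""
    return any(PATTERN in frag for frag in fragments)

def reassembled_match(fragments):
    """Smarter IDS: reassemble the stream first, then inspect."""
    return PATTERN in b"".join(fragments)
```

Which is why packet re-assembly made my "capabilities" list above - without it, a signature base of any depth can be walked right past.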

It's tough, right now, to standardize this.

-Greg


