Intrusion Detection Systems mailing list archives

Re: BlackICE IDS


From: gshipley () neohapsis com (Greg Shipley)
Date: Sun, 5 Dec 1999 17:28:55 -0600 (CST)




On Sat, 4 Dec 1999, Robert Graham wrote:

> Thus, you have a clear demonstration of the benefits of an appliance over a
> generic OS. However, as I demonstrated in my previous e-mail, packet capture
> wasn't the limiting factor in our dual-CPU configuration. Appliance-style
> point-optimizations wouldn't make much difference in our case, unless we were
> running on a single-CPU system.
>
> In any case, the Nov. 15 Network Computing has a performance chart of
> BlackICE v1.0 vs. the NFR v4.0 IDA:
> http://www.nwc.com/1023/1023f19.html
> Your original statement was "I mean, if the NFR IDA can't do 140k packets a
> second, how do you expect some Windows system to perform?" with the implicit
> assumption that Windows couldn't possibly be faster than an embedded appliance,
> but this test showed the opposite. Now, lots of errors creep into magazine
> reviews, so I'm not claiming BlackICE is significantly faster than NFR (I
> haven't run it myself to be sure), but I'll bet it isn't slower to any
> significant degree. (Also note: this test was on a single-CPU system, not my
> dual-CPU tweaked test bed.)

I'm not sure how much I'll be able to contribute here, but as the writer
of the above article I'd like to interject just a few things:

First, we ran, and re-ran, those tests MANY, MANY times, so I stand by
those numbers and am willing to help anyone who wants to try to reproduce
them.  (Be prepared for some serious time and headaches, however - what a
pain.)  What I utterly *FAILED* to do was adequately explain those charts.
For anyone who is looking at that article, I'll attempt to clear some
things up here.

If you look at the "Network IDS failure points" chart found at:
http://www.nwc.com/1023/1023f19.html

you'll see three bars.  To explain each a bit further:

- Start of frame dropping.  This is the point where our testing showed
that the ID system started dropping frames.  I'll use the winnuke attack
as an example.  Someone more technical from the vendor side could probably
explain the actual signature, but from what I can tell a single "winnuke"
attack consists of between 5 and 7 packets, depending on the binary.  This
bar shows where the ID systems would stop detecting SOME attacks (like
winnuke) but continue to detect others (ftp bounce, wu-ftpd exploits, zone
transfers, etc.).  My conclusion is that this is the preliminary "breaking
point" for some signatures/products: they start missing certain attacks
but still catch others.
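
To make that concrete, here is a toy Python simulation (my own
illustration, not anything from the actual test rig): assume only one
frame out of the 5-7 in a winnuke exchange carries the piece the
signature keys on.  A sensor dropping 30% of its frames then misses
roughly 30% of the winnuke attempts, even while other signatures with
more redundancy can still fire.

import random

def is_winnuke_trigger(pkt):
    # Simplified stand-in signature: urgent (OOB) data to the NetBIOS port.
    return pkt["dport"] == 139 and pkt["urg"]

def sensor(stream, drop_rate):
    # Model a sensor that drops frames under load before the engine sees them.
    alerts = 0
    for pkt in stream:
        if random.random() < drop_rate:
            continue  # frame dropped; the detection engine never sees it
        if is_winnuke_trigger(pkt):
            alerts += 1
    return alerts

# One "winnuke" is roughly 5-7 frames; assume only one carries the trigger.
attack = [{"dport": 139, "urg": False}] * 5 + [{"dport": 139, "urg": True}]
detected = sum(sensor(attack, drop_rate=0.3) > 0 for _ in range(1000))
print(f"detected {detected}/1000 attacks at 30% frame loss")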

- Start of signature failure.  The description of this was badly mangled
during the edit process.  What this was TRYING to show was the point at
which the ID system would completely fall on its face and stop detecting
the bulk of the attacks.  BlackICE surprised us here, as we could not get
it to puke using normal traffic and signatures (non-fragmented attacks).
We were using the sniffer-based version, so it was operating as a network
sensor, not just at the host-only level.  Take from this what you will,
but we used the SAME IDENTICAL HARDWARE for ALL PLATFORMS (with the
exception of NetRanger - Cisco shipped their own unit).  So, while IMHO an
"appliance"-based approach should be capable of out-performing something
running on a base OS, that is not the case today in the marketplace.  Both
the NFR unit (note, however, that NFR was deployed on OUR hardware and not
NFR's, so their hardware may perform better) and the NetRanger unit
dropped off the scope before some of the other products (e.g., Dragon on
Linux, BlackICE on NT) did.
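
For anyone trying to reproduce this, finding that breaking point is
basically a ramp-and-replay loop, roughly like the sketch below.  (The
set_load/run_attacks helpers are hypothetical stand-ins for the traffic
rigs and attack scripts; this is not our actual harness.)

def find_failure_point(set_load, run_attacks, levels_mbps, threshold=0.5):
    # Return the first load level where detection falls below threshold.
    for mbps in levels_mbps:
        set_load(mbps)                   # e.g. dial up the traffic generators
        detected, total = run_attacks()  # replay the attack set, count alerts
        if detected / total < threshold:
            return mbps
    return None  # never failed in the tested range (what BlackICE did here)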

And I'll include my rant on the term "appliance" in another e-mail.  :)

- Fragmentation re-assembly failure.  Some products do fragmentation
re-assembly (NFR, Dragon, BlackICE); most do not (NetRanger, RealSecure,
NetProwler, Centrax, etc.).  For this round we used Dug Song's fragrouter
and pumped our attacks through it, laying down the same loads of base
(un-fragmented) traffic we did for the other two.  What you see in this
bar is where the ID systems stopped successfully re-assembling traffic
and the attacks went by unnoticed.
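
For illustration, here is roughly what fragrouter is doing, sketched with
scapy (a stand-in for the real tool; the target address and payload below
are made up):

from scapy.all import IP, TCP, fragment, send

TARGET = "192.0.2.10"          # made-up test-lab address
payload = b"A" * 256           # stand-in for an attack string the IDS keys on

pkt = IP(dst=TARGET) / TCP(dport=21, flags="PA") / payload
frags = fragment(pkt, fragsize=8)  # split into tiny 8-byte IP fragments
send(frags)
# An IDS that re-assembles sees the original payload; one that doesn't
# inspects each 8-byte sliver on its own, and the signature never matches.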

----------------------------------

What we ran on the LAN to generate traffic: WebRamp, FileMetric, and
Chariot.  WebRamp and FileMetric actually pass REAL traffic.  Chariot
passes traffic that looks real (the TCP sequence numbers are accurate; it
doesn't just do "playback"), but the payload is not "real."  For example,
two Chariot endpoints talking over port 80 will be passing garbage in the
data payload, not real http requests.  The ID systems still have to
inspect all of it, though.
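
In Python terms, the Chariot traffic is something like this (hypothetical
endpoint address, and obviously not how Chariot itself is implemented): a
genuine TCP connection, so sequence numbers and checksums are all valid,
but the bytes inside are random.

import os
import socket

peer = ("192.0.2.20", 80)    # hypothetical "endpoint" on the test LAN
with socket.create_connection(peer) as s:
    for _ in range(100):
        s.sendall(os.urandom(1460))  # one MSS worth of garbage per send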

So I don't know if that helps clear anything up, but I'd like to introduce
two other points:

1. You can theorize about what things SHOULD and SHOULD NOT do, but until
you test them you have nothing more than theory.

2. I would encourage anyone who is doing testing to get as close to REAL
traffic as possible.  In this latest round I could not put these things on
a "live" network, but for the article I did earlier this year I did
(http://www.nwc.com/1010/1010r1.html).  The issues surrounding false
positives grow ten-fold when you start looking at live traffic.

Just my .02,

-Greg


