IDS mailing list archives

Re: NIPS Vendors explicit answer


From: Vikram Phatak <vphatak () lucidsecurity com>
Date: 23 Apr 2004 21:36:30 -0000

In-Reply-To: <1082383536.1153.71.camel@note>

Hi Christian,

Okay - clear warning - I am CTO of an IPS vendor.  I couldn't resist the challenge :-)  I will try to address your 
questions directly without plugging our product too heavily...

First, I think the reason nobody else has answered is that (statistical) anomaly based systems that "learn" based upon 
network usage are not very good at dealing with attacks unless there is sufficient volume to indicate some pattern of 
usage (or behavior) has changed.  Okay, now on to the answer.  Our product (ipANGEL) is not a statistical anomaly based 
IPS, but I can address many of the questions you posed in any case.

From our perspective the key is in writing the rules based upon the underlying vulnerability or specific behavior 
whenever possible.  About 80% of our rules are based upon the vulnerability.  These cover around 95% of attacks, 
since there can be multiple exploits against a single vulnerability.

Let me take a moment to define terms.  I have found far too many people use the terms exploit and vulnerability 
interchangeably when they are two different things. 

Vulnerability - A bug in an application or an OS that enables unauthorized and unintended access to, or privileges on, 
the application or OS.
Exploit - A method for taking advantage of a vulnerability.

1)  Now back to the specific question...  Let's start with the "Superfluous Decoding Vulnerability in IIS" that you 
cited.  It is in effect a directory traversal attack that can execute arbitrary code.  There are two ways we can deal 
with this attack, each of which has its benefits and drawbacks...

The first way of addressing this attack is by preventing the general behavior in question: in this case, directory 
traversal.  Either "/." or ".." is always part of the attack, since the overall goal is to get above the root of your 
web server and execute a command.  We can prevent directory traversal attacks by denying user/client initiated 
content to a web server that contains either "/." or "..".

The problem with this approach is that if a web site programmer writes sloppy code that uses ".." in the 
links, the IPS can inadvertently stop legitimate http traffic.  This is not the case for "/.", since there is no reason 
I am aware of that "/." would ever be used by a programmer in a link.  Assuming that your web site programmer does not 
use ".." in his/her code, then there is no reason not to turn on a rule that looks for "..", since such a rule would not 
only prevent the exploits against this specific vulnerability, but also prevent exploits against other vulnerabilities 
in this category that try to perform a directory traversal.  Again, "/." should never be used by even the sloppiest 
programmer, and therefore it is something that can be dropped without much concern.
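
To make this first method concrete, here is a minimal sketch (illustrative Python only, not how ipANGEL is actually 
implemented) of what such a behavioral rule amounts to: inspect the requested URI and drop anything containing a 
generic traversal marker.

    # Illustrative sketch only - not vendor code.
    # Behavioral rule: flag any HTTP request whose URI contains a generic
    # directory traversal marker, regardless of the specific exploit used.
    TRAVERSAL_MARKERS = ("..", "/.")

    def is_traversal_attempt(request_uri: str) -> bool:
        """Return True if the URI contains a generic traversal pattern."""
        return any(marker in request_uri for marker in TRAVERSAL_MARKERS)

    # The classic encoded traversal against IIS still contains ".." literally:
    assert is_traversal_attempt("/scripts/..%c0%af../winnt/system32/cmd.exe")
    # Normal requests pass through untouched:
    assert not is_traversal_attempt("/index.html")

As noted above, the "/." half of this rule is essentially free, while the ".." half is only safe if the protected site 
never uses ".." in its own links.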

The second way of addressing this attack is by looking for an indication that the specific vulnerability is being 
exploited.  This approach requires more research and is more precise. The specific vulnerability you mentioned, 
"Superfluous Decoding Vulnerability in IIS" is one in which it is possible to enter the Unicode equivalent of a 
character into the URL (Example: "%5c" = "\") and the web server will decode it twice - the second time without 
checking permissions.  This creates a problem in that you can enter characters that would normally be forbidden such as 
"\" by using their Unicode equivalent.  It is important to note that this method is specific to the vulnerability in 
question, but not to any specific exploit.  Therefore, if you know that certain characters are normally forbidden for 
security reasons, you can look for the Unicode equivalent of those characters in the URL, thereby preventing the 
vulnerability from being exploited.
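
As a rough illustration of this second method (again just a Python sketch, with the forbidden-character list assumed 
to be configured per server - not vendor code): decode the URL twice, and if the second decode reveals forbidden 
characters that the first one did not, the request is abusing the double-decode behavior and can be dropped.

    # Illustrative sketch only. "%255c" decodes to "%5c" on the first pass
    # and to "\" on the second pass - exactly the superfluous-decode pattern.
    from urllib.parse import unquote

    FORBIDDEN = set('\\"<>|')   # characters the web server would normally reject

    def abuses_double_decoding(request_uri: str) -> bool:
        once = unquote(request_uri)
        twice = unquote(once)
        newly_revealed = set(twice) - set(once)
        return bool(newly_revealed & FORBIDDEN)

    assert abuses_double_decoding("/scripts/..%255c..%255cwinnt/system32/cmd.exe")
    assert not abuses_double_decoding("/a%20normal%20path")

Note that this check keys on the vulnerability (double decoding of forbidden characters), not on any one exploit string.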

The normal sequence of events must be taken into account when thinking about this type of solution, because one of the 
prerequisites is that knowledge of the vulnerability must precede the exploit.  In most cases the order of events is:
    (1) Vulnerability is discovered
    (2) Patch is written for the vulnerability
    (3) Patch is reverse engineered and an exploit is created against the unpatched vulnerability.

This being the case most of the time, it is possible to have an IPS protect against zero-day exploits if the IPS rules 
are written against the vulnerability.  However, if a zero-day vulnerability coincides with a zero-day exploit, this 
solution will not know about the vulnerability, and will therefore be powerless to stop the exploit.  The first 
method, however, may stop the attack if the vulnerability is in the same family as an existing vulnerability (such as 
directory traversal).

Our overall position is that the right method for writing a given rule can only be determined on a case by case basis.  
The second method is preferable in most cases since it is more specific and deterministic than the first.  However, we 
do use both methods depending on the vulnerability in question.  There is a third case in which we need to write the 
rule based upon the specific exploit.  This method accounts for about 20% of our database and reflects around 5% of all 
attacks that we protect against.  The reason for the discrepancy is that a rule written against a single vulnerability 
can protect against many exploits.  Writing against the specific exploit is the method of last resort, since the 
exploit needs to exist before we can write a rule and stop the attack.

2)  It is important to pre-determine the "vulnerability state" of a system being protected.  The purpose of this 
exercise is to weed out all of the rules that we know are not relevant for a given system, since writing rules against 
the behavior can trigger false alarms in some cases, as was mentioned in the first directory traversal explanation.  
If we can pre-determine the vulnerability state of a given system, we can virtually eliminate false positives and 
false alarms.  This improves not only accuracy, but performance as well.

We felt that the most deterministic way to ascertain the vulnerability state of a given system is by incorporating a 
targeted vulnerability scanner into the IPS.  The purpose of the vulnerability scanner is as stated above: to weed out 
all vulnerabilities that we know are not resident on a given system being protected.  Building the following process 
into our product (ipANGEL) allowed us not only to provide proactive intrusion prevention that covers most zero-day 
exploits, but also to enable accurate "self-tuning" and reduce the administrative overhead required to manage the 
product.

Self-tuning process (a rough sketch of the correlation step follows the list):
   1. Update all detection/prevention rules, vulnerability scanner tests & correlation information
   2. Identify the hosts & services being protected, either via administrator input or existing firewall policy
   3. Scan the systems being protected to determine the vulnerability state of each host
   4. Correlate the results of the scan with detection/prevention rules and activate accordingly 
       (there is also a sophisticated rules management portion of ipANGEL where scan results can 
         be overridden and rules can be manually activated and deactivated)
   5. Drop traffic as identified in the active rule base
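
A rough sketch of step 4, the correlation step (hypothetical Python with invented rule IDs, purely to illustrate the 
idea - not ipANGEL's actual implementation):

    # Hypothetical sketch: enable only the rules whose underlying vulnerability
    # was actually found on a protected host.  Rule IDs here are invented.
    RULES_BY_VULN = {
        "CVE-2001-0333": ["rule-1041", "rule-1042"],   # IIS superfluous decoding
        "CVE-2001-0154": ["rule-0987"],                # automatic MIME-type execution
    }

    def activate_rules(scan_results):
        """scan_results maps host -> list of vulnerability IDs found by the scanner."""
        active = set()
        for host, vulns in scan_results.items():
            for vuln in vulns:
                active.update(RULES_BY_VULN.get(vuln, []))
        return active

    # Only rules relevant to the discovered vulnerability state are turned on:
    print(activate_rules({"10.0.0.5": ["CVE-2001-0333"]}))

The point is that everything else in the rule base stays dormant, which is where the accuracy and performance gains 
described above come from.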

3)  The time required for the system to learn about its environment is driven by the vulnerability scan, so it varies 
with the number of hosts being protected and the number of ports being scanned on any given host.  For a customer of 
ours that is an ASP and provides primarily http services from 250+ hosts, the scan took around 8 hours.  Another 
customer with only 50 hosts of varying services found that their scan took less than 1 hour.  Unfortunately, since 
every network is different, the best answer I can give you is "It depends".

3a)  If a system gets infected, it can be picked up during the subsequent scan, but in general our solution is not 
designed to react to systems that are already infected.  Reconfiguration (or re-tuning) happens at a time and interval 
set by the administrator - the default is once every 24 hours.  As far as legitimate traffic goes - we don't actually 
block traffic using a blacklist.  We drop traffic that contains content that is deemed harmful to the systems being 
protected.  The rest of our logic regarding false positives, etc. can be found above.

4)  Since our “learning” phase does not involve statistical anomaly information, we will not incorporate an attack into 
the baseline the way a statistical anomaly based system would.

5)  ipANGEL adapts based upon the environmental changes in the network.  If a new vulnerability is found, the product 
will add the appropriate prevention rules accordingly.  If a vulnerability is patched, the corresponding rule is 
removed.  If the firewall policy is modified to add or delete new services and servers, ipANGEL can (in step 2 of its 
self-tuning process) pull the firewall policy and modify its posture accordingly.  This of course depends on your using 
a firewall that is supported.  If your firewall is not supported, you will need to manually modify the ipANGEL policy 
to reflect the changes being made.

6)  The way ipANGEL is designed, it reports a minimum of false positives without compromising the ability to detect 
exploits.

7)  Since ipANGEL is not a statistical anomaly based system, we do not have a problem regarding unseen behavior.  The 
overall problem with statistical anomaly solutions is that just because something is an anomaly does not mean that it 
is harmful, nor does being “normal” mean that it is not malicious – which (I think) is the point of your question.

8)  We have incorporated several technologies into ipANGEL, including a stateful inspection engine, protocol anomaly 
detection and signatures (as described above).

9)  What we call correlation is the mapping of vulnerabilities found on the host with the detection rules associated 
with that vulnerability.  The combination of writing rules based upon either the vulnerability (or the underlying 
behavior) with automatically tuning based upon the unique vulnerability state of each given host has enabled us to 
create a product that reliably protects vulnerable applications & operating systems from being exploited.  We do not 
claim in any way to do heuristics, nor do we claim to do statistical anomaly detection, since we believe that the only 
way to take affirmative action on an event automatically is to have deterministic proof that the traffic is malicious.

As with firewalls, we believe IPS needs to be more black and white regarding the approach taken.  While much of the 
work being done regarding anomalous behavior is “cool”, it is not practical unless it can be used in the "real world" 
to prevent attacks.  Believing that traffic is harmful and knowing it is harmful are two different things.  Besides 
which, I have never personally seen a product that operates on "magic foo-foo dust" work.

One final note: ipANGEL is designed to protect against the exploitation of operating system vulnerabilities, as well 
as of applications that run as daemons/servers on a host (such as IIS, Apache, Sendmail, Exchange, Oracle, SQL, etc.).  
Because the scanning engine needs to be able to determine whether a vulnerability exists or not, the product does not 
yet incorporate protection for client applications such as Internet Explorer, which do not respond to unsolicited 
requests but rather request data from another host.

I hope this answers the questions to your satisfaction.  If not, let me know.

    -Vik

-- 
Vikram Phatak
CTO, Lucid Security
http://www.lucidsecurity.com

ipANGEL - "Best Emerging Technology" - Information Security Magazine
----------------------------------------------------

Hi,

It's amazing.  Normally each vendor cries "hello me" if he thinks his
product can do anything reasonably well.
However, it seems that no vendor is able to speak clearly regarding
"anomaly detection".
Below is the mail I sent a few days ago.
If still no vendor answers, what should I think?  That the products
are weak in anomaly detection, or that not even one person is capable of
answering?
Is only Toby interested in HOW the products of today really use anomaly
detection?  No one else?

christian


On Wed, 07.04.2004 at 16:07, christian graf wrote:
Hi all,

there are many "imaginable" ways for a NIPS to detect traffic which
should be blocked: pattern-based, data-mining methods (to even guess into
encrypted traffic - see http://www.phrack.org/show.php?p=61&a=9 ),
RFC anomaly, protocol-based anomaly (layer 4 flows, new listening
services, new protocols, ...), statistical methods, ...  Those methods will
most likely be combined with neural networks, back-propagation networks,
state machines and, not least, some voodoo called heuristics.

My goal here is to get a feeling for "unknown / zero day" exploits.  One
of the best places to stop them is probably the host itself
(lids-project or one of many HIPS, AV products and even some nice HIDS
with IPS functionality).  But here I want the NIPS functionality only.
And I absolutely do not want to start a discussion of IDS versus IPS.
Those are two separate functions and cannot be substituted for each other.

        My question is, how would the vendors have detected and blocked
        a prior unseen SINGLE successful attempt which exploits 
        http://www.cert.org/advisories/CA-2001-06.html (Automatic
        Execution of Embedded MIME Types), and a SINGLE successful hack
        using http://www.cert.org/advisories/CA-2001-12.html
        (Superfluous Decoding Vulnerability in IIS)?  Both are Nimda-related and are just a generic example.

Please do not highlight that your product would have captured the tftp
(69/UDP) traffic to the IIS server, NOR that the infected clients will
start scanning for vulnerable IIS servers!  This traffic is all
worm-related and that's easy to detect anyway.
I do want to check out how cleverly the systems may handle an unknown,
single but successful exploit.  Most important is when (at which step) the
exploit is detected and stopped (when the backdoor triggers, shellcode is
seen, new ports are listening, unseen new traffic, ...).
Even target-based intelligence will not really help with my question, as
I'm talking about the unseen exploit ONLY - and target-based covers only already-seen vulnerabilities.  Oops, and 
checking for RFC compliance won't help either (hm, is declaring a binary executable as audio/x-wav against the RFC?..)

In the answer I would like to see the following points included:
1) would the system have captured/blocked a "unique, prior unseen" infection by a user whose mail system was 
rendering the malicious mail?

1a) you may include the behaviour regarding the directory-traversal exploit for IIS.

2) if the system could block/detect it, how was the system taught to become aware of the exploits?

3) how long did it take to teach the system?

3a) Once the first successful exploit was done (and not blocked), the
system will detect "malicious" traffic or even a newly installed backdoor.
How fast can the system be configured to block further similar hacks?
Is this reconfiguration done automatically?
How can the system be sure that no legitimate traffic is blocked automatically?

4) what will happen if the infection occurs during the teaching phase?
(So the exploit gets learned and maybe classified as normal.)

5) How will the anomaly IDS/IPS act during the absolutely normal drift of
any network (new servers, new services, new FW rules, ...)?

6) As always, the system may be tuned to extremes, for example as measured by
ROC graphs (as seen and discussed in the paper by C.C. Michael and Anup
Ghosh, "Two State-based Approaches to Program-based Anomaly Detection", or in
the paper by Stephanie Forrest and Thomas A. Longstaff, "A Sense of Self
for Unix Processes").  The ROC graph in general shows the relationship
between the false-positive rate and the successful-detection rate.
I'm interested: if the system is tuned to report minimum false positives,
how big is the chance to detect an exploit?

7) What data does the IPS need to detect anomalies?
Example: A string-transducer-based system is able to detect
unknown exploits even if its data during the learning phase is NOT
complete.  The big advantage of this behaviour is that the network may
drift in its behaviour and no relearning needs to be done immediately.
New services and new protocols do not necessarily force an alarm or
blocking (unless they are exploits, which will be blocked).
In contrast to this method is the simple "learning mode", where a system
is taught everything which is normal.  Anything outside this learned
baseline is considered "malicious" and should be blocked (a tiny sketch
of this naive mode follows below).  Whenever the network changes, chances
are good that the system will start blocking legitimate traffic - which
is absolutely bad.
So back to my question - what kind of traffic (exploits or exploit-free)
is needed to teach the system?
How long or how many packets need to be captured in the learning phase?
How will the IPS react to prior unseen traffic / behaviour?
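
(For clarity, here is a tiny sketch in Python of the naive learning mode I mean - purely illustrative, not any
particular vendor's implementation:)

    # Naive "learning mode": whatever (host, port, proto) flows were seen
    # during the learning phase are "normal"; everything else is blocked.
    learned_baseline = set()

    def learn(flow):
        learned_baseline.add(flow)

    def allow(flow):
        return flow in learned_baseline

    learn(("10.0.0.5", 80, "tcp"))
    print(allow(("10.0.0.5", 80, "tcp")))    # True  - seen during learning
    print(allow(("10.0.0.9", 443, "tcp")))   # False - a new, legitimate server gets blocked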

8) What exact technology is used to detect anomalies?  Most vendors claim
to have several technologies combined...

9) How will the correlation engine combine the different technologies to
detect anomalies?  I hear heuristics dancing :)

I don't expect exact numbers for my questions. As every network is
different, the numbers will vary greatly. But I expect to get generic
answers by this mail.

Whoever will answer to this, thanks for taking time.

I would be glad to get no blah blah.  I could name some really bad
white papers regarding anomaly detection from some vendors which are not
worth the paper they are written on.  Some technical answers would be
fine.

And please keep in mind that I didn't say that one technology is better
than another.  That's not the goal here.

christian


