Full Disclosure mailing list archives

Re: Bigger burger roll needed
From: James Tucker <jftucker () gmail com>
Date: Thu, 13 Oct 2005 03:19:13 +0100

No, but the situations I'm talking about are *not* those types of
situations.  There's no reason why input coming in from a web server
should not be properly bounds checked.

As you suggest later on, maybe I wasn't reading clearly. I thought we
were discussing BSOD crashes, which are typically caused by ring 0
code or dependent hardware. I'm not so much of a moron as to be
suggesting that you allow arbitrary foreign data to flow unchecked.

You are correct if your response is about to be "but I'm not talking
about ring 0 code".

We could always trust all input... but the fact of the matter is that...
life is never that simple.

Input to drivers should be within known ranges. Outside of that, I
would strongly suggest the driver is incomplete.
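To illustrate "within known ranges", here is a sketch of an ioctl-style request validator; the struct, field names, and limits are all illustrative, not from any real driver API:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical request arriving from user space. Every field has a
 * known valid range that the driver can check up front. */
struct fan_request {
    uint32_t fan_id;     /* which fan to control */
    uint32_t duty_cycle; /* percent, 0-100 */
};

#define NUM_FANS 4

/* Act on the request only when every field is inside its known range;
 * anything else is rejected before it can reach hardware or ring 0
 * logic. Returns 0 on success, -1 on rejection. */
int fan_ioctl(const struct fan_request *req)
{
    if (req == NULL)
        return -1;
    if (req->fan_id >= NUM_FANS)
        return -1;
    if (req->duty_cycle > 100)
        return -1;
    /* ...program the hardware here... */
    return 0;
}
```

A driver that accepts values outside these ranges, or has no such checks at all, is in this sense incomplete.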

Data stream tracking is one method of protecting against both of the
above; however, it is very costly in processing, as well as requiring
significantly sized validation tables to be built. It is not used in
any public domain kernel I have seen.

Actually, I'm talking about situations where we know what causes
specific crashes.  It's very easy to find these situations as they're
included in security disclosures.

Many of the crashes which get down to the kernel only manage to do so
because they actually target kernel code. Yes, such input should not
get there, hence my reference to design architecture. This, however,
leads directly back to handling speed, which is why things are
changing these days and not before.

There is a simple way to stop remote vulnerabilities from crashing a
kernel: never let networking code touch kernel code. Clearly, however,
this is not how *nix or NT are built; I reiterate that this is with
good reason.

Obviously, it's not possible to trace every crash and fringe situations
do occur.  That doesn't change the fact that MS is handling their
procedures poorly and they're making sloppy mistakes.  Many other
companies/groups make sloppy mistakes as well.  I didn't see anyone in
this thread claiming that MS was the only company that did this... just
that they were the most exposed one.

I was referring more to the fact that most apps which cause these
kinds of vulnerabilities are not following standard, well-documented
procedures and architectures (several of the vulnerabilities you are
probably thinking of existed prior to the new documentation and
procedures, however). Yes, that is sloppy coding, but it is becoming
rarer in just-released code from many of the giants. I would say they
are learning their lesson.

In my real experience, people who try to point out how they have real
experience and others don't...

I think you read something between the lines there.

Unless you have a memory management flaw where the partitioning of the
memory is compromised.  Such is the situation in Windows 9x... as I
stated in the thread, it's unlikely that that type of situation would
occur in a Windows NT style environment, but you still get other forms
of crashes for a number of different reasons.

9x has so many well-known vulnerabilities and faults by now that it's
hardly worth discussing. Yes, memory corruption was always an issue
there, but by the nature of the OS, of course it was. Given its
architecture, you needed to trust almost every application on the
system to ensure stability.

With NT, you may want to be a little more specific. A few years ago a
client had trouble with some HP printer drivers running across
Windows 2000 servers and clients. In a later driver update which fixed
the issue, a new control code parser was implemented (our specific
issue) and the whole driver was lifted out of kernel mode (the more
general architecture issue). The latter prevented the possibility of a
further BSOD by printer driver for anything redeveloped under that
branch. Legacy code base redevelopment had led to poor driver
architecture, and this was a financially based business decision,
obviously.

A BSOD isn't the only type of software crash and it's silly to only talk
about BSODs when you're talking about customer satisfaction.

Maybe, but it depends what is being discussed. An application from a
3rd party (defined here as anything other than the kernel and its
dependencies) can crash on its own, and provided the developer has
done what they were told, the kernel will stay up. To talk about crash
prevention beyond this is to suggest that the OS should prevent apps
from crashing. With regard to the operating system and its dependent
services, yes, they should be entirely reloadable, maybe... Example:
lsass is started from a specifically defined location during system
boot, but if it were "restarted" after a crash, with the kernel still
up (but now incomplete), you have little means of tracking what you
are loading (the kernel is blind to certain events). In that state it
can therefore be decided that the system should be restarted, as per
general good security procedures, among many other reasons (dead
handles, for one example). As you may note, this is exactly what the
operating system attempts to do, meanwhile giving the user time to
close applications (arguably not enough for most users).

Try reading.  It's a beautiful thing.

mm hmm.

(ps. I'm assuming you meant to send this to the list from your tone.
Or maybe you got embarrassed at the last minute and decided only to
send it to me. Either way, it's going to the list.)

No, I made a mistake, thank you.
_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/

