Dailydave mailing list archives

RE: Re: Hacking: As American as Apple Cider


From: "Kyle Quest" <Kyle.Quest () networkengines com>
Date: Sat, 10 Sep 2005 01:27:08 -0400


You raise a really interesting point regarding my apparent
utopianism!! Computer security is a field that _demands_ utopianism
because, by its nature, it's a zero sum game. If the bad guys win, I lose,
and vice-versa. If the bad guys find _one_ hole, they win - it's a game of
absolutes, a game of 100%, a game for utopians.

The only problem with utopianism is that it's unachievable.
It's the ideal that we all would like to have... including me :-)
However, in life... it's not just good or bad. There are so many
shades in between. With computer security, as with other kinds
of security, the world doesn't go for the 100% solution. As you know,
businesses perform risk analysis (etc.) to evaluate what the risks
are, which risks are acceptable and which are unacceptable, and
based on that they build a solution to reduce the risk to an acceptable
level. That acceptable level is far from 100% security. In many cases
achieving 100%, or getting really close to it, would cost more than
the value of the assets these security measures are meant to protect.

What's the alternative? Embrace mediocrity? Run Windows on
your mission critical servers, let your sales reps all use Outlook, run
your firewall with a "Default Permit" rule, keep your patches up-to-date
and shrug and say, "well, everyone ELSE sucks TOO" when you wake
up one morning and discover that you suck? Maybe you're comfortable
with that, but I am not!!

Definitely not... I agree with you on that one.


I said that "enumerating badness" was a dumb idea. I said that "enumerating
goodness" was a smart idea. I did NOT say it's "the silver bullet"  

This is a great and tricky thing about people. Different people may interpret
the same thing in different ways. The way I interpreted it was... "enumerating badness"
is bad, "enumerating goodness" is good, and it should replace the old "enumerating badness"
approach because it will solve the problems of the old approach... and it will be
effective where the old approach wasn't. That's where the silver bullet analogy comes in.


It would be great if it were true, but it's not. It's a great
approach... and it could be the ideal we should strive to achieve,
but it's not achievable... for a number of reasons.

Again, in my article I didn't hold "Enumerating Goodness" up to be
some kind of holy ideal. It's just better than "Enumerating Badness."

It's definitely better. Unfortunately it doesn't work in all cases,
but when it does, I'd say, it's 100 times more effective :-)


Real example: I did a thing for an Ecommerce site where I managed,
after considerable back-and-forth to talk them into just putting a
prefix of
/transact/
in front of all the URLs that had anything to do with a transaction.
Suddenly they were able to put code (not that anyone writes code anymore)
in their CGI to do the equivalent of:
if (strncmp(url, "http://www.bank.com/transact/", 29) != 0) {
        // 404 -- the URL doesn't start with the 29-character /transact/ prefix
}
Is it perfect? Hell no. Does it knock down everything except targeted attacks
for the cost of one call to strcmp? Hell yes. What's hard about that??

This is a great example of a case where the whitelisting approach was doable.
It doesn't always work that way. It's not always possible to "bend the whole world"
around security requirements.

So you can extend the idea a bit farther. If you tell me that "All URLs look
like:"
http://www.bank.com/images/.*\.gif
http://www.bank.com/images/.*\.jpg
http://www.bank.com/transact/jsessionid=[0-9A-Z]{12}
..or whatever - is that valuable? Hell yes. You can stick a copy of Squid
cache in front of your site and make a whole lotta rosie disappear in a
heartbeat. Then you make the stuff that matches nothing go to a 404
server and look at the 404 server logs every so often and see if something
looks weird.
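
As an illustration of what that kind of front-end check could look like, here is
a minimal sketch using POSIX regular expressions in C. The pattern list and the
url_matches_whitelist() helper are made-up names for this example only; in
practice the same logic would more likely live in the Squid/redirector
configuration than in hand-written C.

#include <regex.h>
#include <stddef.h>

/* Anchored patterns describing every URL the site legitimately serves
   (illustrative list, taken from the examples above). */
static const char *whitelist[] = {
    "^http://www\\.bank\\.com/images/.*\\.gif$",
    "^http://www\\.bank\\.com/images/.*\\.jpg$",
    "^http://www\\.bank\\.com/transact/jsessionid=[0-9A-Z]{12}$",
};

/* Returns 1 if the URL matches something we enumerated as "good",
   0 if it should be shunted to the 404 server. */
int url_matches_whitelist(const char *url)
{
    for (size_t i = 0; i < sizeof(whitelist) / sizeof(whitelist[0]); i++) {
        regex_t re;
        if (regcomp(&re, whitelist[i], REG_EXTENDED | REG_NOSUB) != 0)
            continue;                      /* unparsable pattern: skip it */
        int matched = (regexec(&re, url, 0, NULL, 0) == 0);
        regfree(&re);
        if (matched)
            return 1;                      /* enumerated goodness: let it through */
    }
    return 0;                              /* everything else: default deny */
}

Anything that falls through to 0 is what would get handed to the 404 server for
the occasional log review.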

NetContinuum and a few other companies try to use the whitelisting approach.
Whitelisting is great when you can get very close to the infrastructure you
are trying to protect. If you are a service provider that doesn't have
direct access to the infrastructure you're trying to protect, then it's
much harder and in some cases impossible. Once again, there are many... many
times when it works great. I just wanted to point out that there are many cases
where it doesn't. Again, it's a case of a slightly different interpretation.
As long as we agree that it's not a silver bullet, we'll be all set...

Are you saying that because something's not a 'silver bullet' it's
a bad idea?? Because that's not very good thinking. I'm not talking about
something that's world-shakingly hard, here.

Not at all... Defense in depth. Proper architecture. Using the best
technologies on a case-by-case basis. Sometimes solutions can
be surprisingly easy... just like you said.

"Default deny" isn't going to work for a service
provider. I never said it would. 

Ahhh... but the way you structured your write-up,
it might be indirectly implied. You say one thing
is bad, another thing is good, and look, it worked
in this example, which is effectively saying:
"it worked here and it will work everywhere else".

And if he is aware
of the nature of this biz app network traffic,
how would he be able to deploy a "Default Deny"
system without knowing the low level details,
which are probably known only to a couple of
developers who are long gone?

So are you saying "Because Marcus' suggestions
do not retroactively cure past mistakes, they are
no good?"

I don't mean that... This was just an example
of an environment where a whitelist-based system
will most likely not be adopted. They'll put it
on their network, configuring it to the best
of their knowledge. They'll turn it on, and all
of a sudden a lot of things will break. After lots
of attempts to compensate, they'll finally give up
and pass on buying it.


So what are you saying? Because people make mistakes
we shouldn't even TRY to get it right? Because people
make mistakes we should just roll over and give up?

Not at all. Trying to get it right is the way to go.
But you weren't talking about just trying. You were
implying that it's doable... and that people just don't do it
because they don't know any better. They do try. Is it
good enough? No. Do we need to try harder to get it
right? Yes.


It's a phenomenon I've been tracking since its
beginning. Oversimplifying it is not something I am
trying to do - I've written and talked about it I don't know
how many times (too many) in the last 12 years.
Anyhow... It's a nuanced and subtle issue and it's
not one I am trying to dismiss lightly. If you want a
more detailed series of arguments about why I think
vulnerability disclosure and so forth is bad, the article
you're reacting to isn't it. I'd suggest you go back to my
CSI 1998 keynote or some of my writings in USENIX
;Login: from around 2000 when I was doing a regular
column there.

We'll have to agree to disagree on vulnerability disclosure.
I do understand where you are coming from though... and you
have some valid points. There are a couple of counterpoints
that the other side has as well.


"Wouldn't it be more sensible to learn how to design 
security systems that are hack-proof than to learn 
how to identify security systems that are dumb?"

It sure would... but that's not commercially
possible.

Aha! There. You've finally said something I can
agree with!!! :)  Can we be friends, now?

Products of reasonable complexity 
would take too long to make and would be
prohibitively expensive.

So, a piece of freeware like Qmail is prohibitively expensive
compared to a piece of freeware like sendmail?? No, wait,
I got that wrong... An operating system that pays attention
to security design issues like OpenBSD is going to be
prohibitively expensive at $0 license cost compared to
WindowsXP at $150 license cost? No... wait... I must
have that wrong, too. Wait. Do you know WHAT you're
talking about?

It's not that simple. There are many things to consider here.
Without having commercial software to begin with, there's
a good possibility that the free software wouldn't
exist in the form it does now. Analyzing just this
would take pages and pages...


Ah. Suddenly everything comes clear. You have a
degree in software engineering. No wonder you think
you know everything about everything but appear to
believe that "accept mediocrity" and "accept the fact
that all code crashes" are your philosophical touchstones.

It's not about accepting. It's about being aware and
being prepared. I'd like to see a person who has
been able to write perfect code (let's assume for
now that the architecture was perfect) all the time.
Can you say that in all of the code you've written you've
never had any bugs?

Kyle

P.S.
Ok, I'm done polluting the mailing list with this junk :-)
Marcus and I will keep this offline hopefully.



