Dailydave mailing list archives

Re: Re: Hacking: As American as Apple Cider


From: Barrie Dempster <barrie () reboot-robot net>
Date: Wed, 14 Sep 2005 09:52:41 +0100

On Mon, 2005-09-12 at 01:16 +0100, Dinis Cruz wrote: 
Some comments about "The Six Dumbest Ideas in Computer Security" written 
by Marcus J. Ranum on 
http://www.ranum.com/security/computer_security/editorials/dumb/

1) "Default Permit"

Totally agree. This is the opposite of 'Secure in Deployment' or 
'Locked-Down State', which is all about reducing the attack surface.

Me too.

He also touches on a point which I think is much bigger than this - the 
fact that most security decisions made today are still binary in nature: 
"Is this type of traffic allowed to go through the firewall?", "Should 
this application be allowed to execute on my computer with Full 
Privileges?", "Should I open this attachment or not?".

The problem with this 'binary' approach to security is that it shifts the 
responsibility for the exploitation onto the person (or application) who 
said YES to those questions. The answer here lies in what is usually 
called 'Defense in Depth', where multiple layers of security protect the 
assets. At the moment most assets are only protected by one (or two) layers.

Something Marcus could have elaborated on; he made this point in a
subtle and roundabout way, and it's more eloquently explained here.


Default Deny (or 'Secure in Deployment' or 'Locked-Down State') is a 
very good idea, but it is seldom executed, for the simple fact that the 
owners of most semi-complex systems (i.e. software) don't know (in 
detail) the resources needed to execute it and what type of traffic is 
valid. In other words, how can you 'only' allow what you know 'is good 
and valid activity' when you don't know what that activity looks like?

Yep, his point was that it's no more work to figure this out than it is
to try to keep up with the malware - in most cases it's actually less work.
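
To make the contrast concrete, here is a rough sketch in Python of the two
policies applied to inbound ports; the port numbers and the
ALLOWED_PORTS / KNOWN_BAD_PORTS names are purely illustrative assumptions,
not anything from Marcus's article or any real product:

    # Default Permit ("enumerating badness"): allow anything that is not on a
    # badness list. The list is never complete, so unknown traffic gets through.
    KNOWN_BAD_PORTS = {135, 139, 445}      # illustrative blocklist only

    def default_permit(port):
        return port not in KNOWN_BAD_PORTS

    # Default Deny ("enumerating goodness"): allow only what is explicitly
    # known to be good. Unknown traffic is dropped by default.
    ALLOWED_PORTS = {22, 80, 443}          # the short list you actually understand

    def default_deny(port):
        return port in ALLOWED_PORTS

    if __name__ == "__main__":
        for port in (80, 443, 31337):
            print(port, "permit-policy:", default_permit(port),
                  "deny-policy:", default_deny(port))

The point of the deny-policy version is that the list you have to maintain
is the one you can actually enumerate.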

2) "Enumerating Badness":

Agree with this, but like yourself would have preferred whitelisting and
blacklisting as the terms ;-)

At the last OWASP London conference I gave a presentation called 'The Fog 
of Software' which talks about these issues and where I argued that 
'there must be a limit to the amount of complexity that we can throw at 
complex-but-stable systems before serious disruption occurs'. I found it 
very worrying that most people to whom I have talked and presented these 
ideas have responded with: "I don't think that disruption will happen 
because 1) it hasn't happened so far and 2) the affected entities will 
create solutions that will prevent it".


Blinded by the fog you were describing.

Here Marcus's efforts and 'strong position' deserve maximum points and 
should be supported; there are not enough people out there (in a 
position to write something that is read by a large audience) who say 
that the 'king is naked', that our industry is not doing enough, and 
that in most cases we are not really solving the real problems.

Yeh but we should still criticise it for quality - ESPECIALLY if it's
widely read.

3) "Penetrate and Patch"

I agree with Marcus that the ultimate goal should be to design systems 
which are secure by design (by default, in deployment) and that are 
built with flaw-handling in mind (since this is the basis of good 
engineering practice).

I don't think Marcus explained enough of the causes behind this, i.e.
it's generally profitable to release software but not generally
profitable to develop it. Development is seen as a cost to be minimised,
which I think is the most significant factor here.


4) "Hacking is Cool"

Of course it is, that's why they make all those great movies about it.


A criminal activity is a criminal activity, and it doesn't matter if it 
is done online, offline, under water or upside-down. Under most 
circumstances I don't agree with it, but the world is made of shades of 
gray and sometimes the definition of what is a 'criminal activity' is 
flawed.


Absolutely.


What I find a bit worrying in Marcus's comments is that he seems to be 
almost arguing that 'Information is Dangerous' and that we shouldn't be 
allowing the publication of (for example) books that describe 
vulnerability exploitation techniques. This is very close to promoting 
censorship, which is something very dangerous.

Indeed, asserting that "hacking's utility to society is less than the
damage it causes" is *IMO* completely nuts, although this is probably
because of the many meanings of the word 'hacking'.

Criminal activity's utility to society is less than the damage it
causes.

Understanding Security's utility to society is much greater than the
damage it causes.

Marcus has a few valid points, but in his self-admitted attempt to be
slightly humorous he left the comments far too open to interpretation in
some areas, and in others worded them so badly as to cause flawed
interpretation.


I also think that Marcus uses a very narrow definition of 'learning 
exploits', since in this post (and in others) he argues that learning 
them is not very important. Well, here I think that again we have 
different definitions of what 'learning to exploit' means.

Most learning is harmless; it's what is done with the knowledge gained
that can be dangerous.

I find the Third Reich interesting to study; it has no real benefit to
me as I won't ever do anything with the knowledge, but I should be
allowed to gain that knowledge regardless. I certainly shouldn't be
considered a Nazi for watching the History Channel!

If he is talking about the ability to grab a bit of code (or tool) that 
somebody else wrote and execute it against a target system, then yes I 
agree that this has limited value.


The problem with the article is that you were left unsure of this point;
if someone with a hacking/security background was left with this sort of
confusion, the average /.er is going to have a hard time making sense of
it.




5) "Educating Users"

Here I totally agree, and I think that Marcus hit the 'nail on the head' 
when he said "... If it was going to work, it would have worked by 
now..."

I don't!

This seems to imply that we have tried user education and it didn't
work. I don't know about your school, but at mine there was absolutely
zero education on digital security, and the physical security education
was lacking too.

Security education is also still pretty new in most organisations I have
seen, and it's usually taken the same way the fire drill is: "blah blah,
don't tell people your passwords, etc. etc. ... so what did we all do on
Friday - we have this room booked for an hour and I hate all this crappy
video stuff we are forced to watch in our 'required viewing for
employees' guide".

Education hasn't served its purpose yet - previously (and currently)
people did not understand the basics of the system, never mind the
basics of security.

Contrast my training with my son's to see my point.

I didn't see a computer until I was about 8, and it was my own - the
school had one computer and I wasn't allowed to use it at such a young
age; it was too complicated for infants. So I was self-taught in
computer skills, and even when the Internet was becoming more widespread
its use was very limited, as it wasn't taken seriously until after I
left school.

I took my son to the nursery the other day (sadly a rarity since I'm
working). When I got there he dragged me over to their PC, clicked the
Cbeebies link in the favourites bar at the side and started playing his
favourite colours and shapes game. He is 3 years old, can't read yet, has
trouble counting past 20, but has basic mouse and keyboard skills -
education has just begun, it isn't over yet.

When this generation comes through, and words like "phishing" are in the
dictionary, security education will have more of a chance. Being
focused on this industry puts you ahead of the curve in learning, but it
doesn't mean that the general public won't catch up to a more useful
level. The responsibility for this also falls on the vendors to an
extent, as it is their systems that the users require guidance on.
You are more likely to be forced to read an EULA to the very bottom than
to be forced through a quick security tutorial during/after software
installation.

To bring in a parenting analogy: "The reason most parents act 
immediately when their baby cries (especially first-time mums) is that it 
is much harder (for the parent) to NOT REACT for the first 10 seconds 
and assess the situation. When you REACT immediately you are making the 
EASY decision; when you DON'T REACT immediately (which is what most 
fathers tend to do, btw) you are actually making the HARD decision, but a 
large number of people will criticize you for that (note for the 
non-parents on this list: 99.9% of all babies' cries are NOT emergencies 
and don't need an immediate reaction (and in the 0.1% of cases, the cry 
is completely different and 100% of parents will RUN))"


That's an excellent analogy and is one I'll be using in the future. I
couldn't agree more.


B) "Create Simple and Open solutions" - The reason why our 
systems/technologies are so hard (if not impossible) to defend, are 
because they are too complex and closed. There are too many 
interconnected components whose individual parts are not published and 
the side-effects of this connections is not known/understood.

Indeed, keep it simple and educate the user on the simple system,
although complexity sells! The problem is economics mainly.

Software should spend more time in testing and development - but this
costs.

Users need more training - but this costs.

Checking needs double checking - but this costs.

The problem with these costs, as opposed to the cost of patching, is that
you can hide the cost of patching for longer; the cost of development
and training can more easily be quantified in advance.


 C) "Companies should be forced to disclosure how many vulnerabilities 
they KNOW they product/infrastructure/system has" - Note that I am not 
saying that they should publish the technical details of how to exploit 
them. They should be forced to do something like the 'eEye upcoming 
advisories' (http://www.eeye.com/html/research/upcoming/). This way the 
end-clients would be able to make much more informed decisions and those 
companies would not be able to do like they do today which is to only 
acknowledge security vulnerabilities when they are either a) externally 
discovered, b) are being actively exploited or c) are so bad that they 
have to issue a patch ASAP and acknowledge the problem

Ssssh! Don't say that, Cisco are watching - you have been added to the
list.

 D) "Create evaluation standards for the security tools we have today" - 
We need to be able to have a pragmatic, logical and effective way to 
compare: Operating Systems, Firewalls, Anti-Virus, Web Application 
Firewalls, Web Application Vulnerability Scanners, IDS, etc...

Standards? On the Internet? What an insane idea.

I'd love to see something like this but it needs rich vendor support and
most of the rich vendors rely on their ability to sell crap products and
services. They don't want this fix.


 E) "Create tools (and services) that help in the creating of secure 
run-time environments (with Default-Deny and Enumerating goodnesss)". 
With today's complex systems we need help to process the information and 
to simplify that complexly. For example a tool that would remove from 
Windows all files that are not required to execute a particular function 
(if a server is only acting as a web server why does it need to have all 
the other functionality in there?)

I'd call that tool "the installer". IMO you should have to install every
single component you need yourself, and when it is installed it should be
limited in use until you set it up correctly. This should be accompanied
by excellent documentation.

We can accept whitelisting for almost everything, but when it comes to
the OS components we are content to "lock down". This is something that
is hard to get past the user and has problems of its own. But from a
Utopian security focus, it's an excellent defensive measure. There's no
reason why XP *Professional* can't be set up like this while XP Home is a
little easier to use and open.
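
As a very rough sketch of that 'enumerate the goodness' idea (entirely
hypothetical - the manifest format and paths are made up for illustration,
this is not any existing tool), a small audit script could walk an install
tree and flag anything that is not in a known-good manifest:

    import sys
    from pathlib import Path

    def load_manifest(manifest):
        # One known-good relative path per line; blank lines and '#' comments ignored.
        lines = Path(manifest).read_text().splitlines()
        return {line.strip() for line in lines
                if line.strip() and not line.lstrip().startswith("#")}

    def audit(root, allowed):
        # Default deny: any file on disk that is not in the manifest is suspect.
        root = Path(root)
        return [path for path in root.rglob("*")
                if path.is_file() and str(path.relative_to(root)) not in allowed]

    if __name__ == "__main__":
        root_dir, manifest_file = sys.argv[1], sys.argv[2]
        for extra in audit(root_dir, load_manifest(manifest_file)):
            print("not in manifest:", extra)

Run it as, say, "python audit.py /var/www allowlist.txt"; whether you then
delete, quarantine or just report the extras is the hard policy question
the tool can't answer for you.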

 F) "Slow down the creation of new products/features/functionality and 
focus on getting the ones that we have right" - What we need today is to 
have a secure, reliable, robust, non-exploitable and 
'no-patches-required' version of what we have today. We don't need a new 
complex system which will bring more vulnerabilities and who nobody will 
really understand (when we already have solutions today that we almost 
understand)


Hell yeh, but will the customers pay for this?

 G) "Use the power of the buyers to force the solution-providers to be 
open about their product's and to stop playing the 'lock-in' game"

The buyers don't generally know what they want. Most of them want a
security appliance, because it makes sense to them.

 H) "Segment internal networks" - It is crazy the fact that most 
networks are not segmented and once a malicious attacker has access to 
one computer in the internal network, it is able to directly attack 
critical resources like: Database servers, Active Directories, 
SQL/Oracle databases, other workstations, etc...

Yeh, the cost factor comes in here too.

 I) "Source-Code disclosure" - Without wanting to enter into the whole 
open source debate, all that I would like to say is that not disclosing 
the source code makes developers rely on 'Security by obscurity' and 
makes it very difficult for the good guys to identify malicious code

Excellent point to end on - get the FOSS advocacy done as a finale!

-- 
With Regards..
Barrie Dempster (zeedo) - Fortiter et Strenue

"He who hingeth aboot, geteth hee-haw" Victor - Still Game

blog:  http://reboot-robot.net
sites: http://www.bsrf.org.uk - http://www.security-forums.com
ca:    https://www.cacert.org/index.php?id=3
