Interesting People mailing list archives

IP: Re: A bit more of why I sent out the MS message


From: David Farber <farber () cis upenn edu>
Date: Sat, 28 Aug 1999 20:30:00 -0400



To: farber () cis upenn edu
Subject: Re: IP: A bit more of why I sent out the MS message
From: "Perry E. Metzger" <perry () piermont com>
Date: 28 Aug 1999 10:49:08 -0400
Lines: 124


David Farber <farber () cis upenn edu> writes:
Sorry, but if simple errors like that get through, just how secure
are both the web sites of major players and, more interestingly, their
firewalls?

As I mentioned before, what impact on the market would a bit of
well-designed news planted at, say, the NY Times or WSJ site have?
What would be the impact of a penetration into a major software
vendor's site? I just wonder how tight these guys really are and what
damage one could cause if they are sloppy.

Dear Dave,

Feel free to forward this if you wish.

As a computer security consultant, I can assure you that there *are*
probably plenty of software vendors with sites and indeed entire
networks (including master repositories for their software) that are
likely vulnerable to attack. This is not a guess -- this is based on
personal observation.

The problem in many organizations is that "security is silent". To
someone in management, a secure corporate network and an insecure one
look very similar until a break-in occurs, just as on the wire,
packets encrypted with 40 bit RC4 and with 3DES don't look very
different.
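The on-the-wire comparison is easy to illustrate with a toy sketch (my
own illustration, not anything from the original note): RC4 ciphertext
produced under a 40-bit key is statistically indistinguishable, byte
for byte, from ciphertext produced under a 128-bit key -- the weakness
only shows up when someone mounts a brute-force key search. The keys
and plaintext below are made up for the demonstration.

```python
import math
from collections import Counter

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream from the given key."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# Encrypting all-zero plaintext, so the ciphertext IS the keystream.
plaintext_len = 4096
weak   = rc4_keystream(b"\x01\x02\x03\x04\x05", plaintext_len)  # 40-bit key
strong = rc4_keystream(bytes(range(16)), plaintext_len)          # 128-bit key

# Both ciphertexts look like nearly uniform random bytes; nothing on the
# wire reveals that one key could be brute-forced and the other could not.
print(round(entropy_bits_per_byte(weak), 2),
      round(entropy_bits_per_byte(strong), 2))
```

The point of the sketch: a passive observer measuring byte statistics
sees essentially the same thing either way, which is exactly why weak
crypto -- like lax security generally -- stays invisible until someone
actually attacks it.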

Often, if no one has yet detected problems because of lax security, it
will simply be assumed that security is adequate, especially if "real
security" would involve spending money, causing inconvenience, or,
worst of all, stepping on toes. Internal company politics are often an
organization's worst cause of security problems.

When problems occur, of course, management decides that action must be
taken. The response to the "security is silent" problem often results
in companies implementing "loud noise" -- heavily supporting internal
security departments that produce the appearance of security by
imposing fascistic internal security controls that gravely
inconvenience employees.  This is not to say, of course, that fascist
security controls help. Like 19th century quack cures, they induce
lots of confidence in the patient, but don't do much for the disease.

I've seen many organizations in which extremely tight-seeming controls
were put into place, largely to placate incompetent security managers
and senior management, with the basic effect that security was not
increased but actually decreased as employees found ways around stupid
restrictions imposed without any thought to what threat they were
supposed to address. However, management felt better.

Requiring that forms be filled out before employees can receive email
or use the web, having people change passwords every week or two on
networks where passwords travel in the clear, putting in firewalls
that inconvenience insiders but still let outsiders reach machines on
the inside network anyway "because they need to run this web
application", etc., etc. -- all of this is a good way to make a
security department look active while the actual threats go completely
unaddressed. Sad to say, security departments often decrease both
productivity and security.

However, when anything does go wrong, management gets to point to such
security departments and say "we could have done no better", just as
they get to cover their buttocks with security assessments made by
utterly worthless security consultants from large, important-seeming
companies. A certain very famous east coast newspaper that was highly
embarrassed about a year ago got to say in its post-incident PR that
it had passed an audit done by a large, famous firm with flying
colors. What no one said, of course, was that almost all such
assessments seem to be conducted by junior people who operate with
checklists instead of with actual understanding of what they are
auditing.

The "bad auditors" problem is especially troubling in the big
accounting firms' consulting departments, by the way. These firms
capitalize on a highly polished reputation, but almost all of them
send kids fresh out of their "boot camp" programs in to assess whether
a network is secure, with someone senior "supervising" them -- which
in practice means "handing them checklists". Such kids are usually
well-meaning, but you can intentionally set up holes in a network the
size of elephants and they won't notice them, because the holes aren't
on the checklist. I know -- I've intentionally done this to see
whether or not the auditors could be trusted, and they usually fail.
If a flaw is on the list of whatever scanner or other security-check
program the auditors bought, they'll find it, but mis-architecture or
even simple but unusual situations will fly right over their heads.

[I'm sure I'll get flamed by a few of those guys. After all, they have
a very profitable reputation to uphold, even if it is a lovely castle
built from sand.]

The "hiding behind famous products" problem is equally bad. Firms will
now often say, proudly, that they are using the firewalls/security
products/etc. of famous vendors X and Y, as though this meant
anything. Just because you have a great door on your safe doesn't mean
it fixes the paper walls around the vault.

I once pointed out, five minutes into an audit, that a client was
managing such a Famous Firewall Product via telnet, with clear-text
passwords passing over the very network segment where the web servers
most likely to be broken into were located. It was stunningly obvious
that these people had built a large steel safe door, then put the
combination in a paper envelope and taped it to the outside of the
door. It hadn't even occurred to them to worry about such matters,
because they were using Famous Firewall Product. Luckily, my
observation caused some changes in that case, but often such reports
are ignored.
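For what it's worth, the telnet problem takes about ten lines to
demonstrate. In this sketch (hypothetical credentials, and a local
socket pair standing in for the network path), the bytes the
administrator types are exactly the bytes an eavesdropper on the wire
collects:

```python
import socket

# A local socket pair stands in for the path between an administrator's
# workstation and the firewall's telnet management port.
admin_side, wire_tap = socket.socketpair()

# What a telnet login actually puts on the wire: raw, unencrypted bytes.
# (Credentials are made up for the illustration.)
admin_side.sendall(b"login: fwadmin\r\npassword: s3cret!\r\n")

captured = wire_tap.recv(4096)  # what any sniffer on the segment sees
print(captured)

# The password is sitting right there in the capture.
assert b"s3cret!" in captured
```

No firewall product, however famous, saves you when its own management
channel is readable off the wire -- encrypted remote administration
(ssh was already available in 1999) closes exactly this hole.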

Don't get me wrong. The security situation at a few firms is very,
very good. They're typically places with low politics, high amounts of
technical clue among their systems administration staff, and a desire
for substance rather than mere appearances. Most firms, however, have
become pathetic, relying on appearances of security and purchased
security products to act as, to steal a phrase from Jeff Schiller,
"Magic Security Pixie Dust."

So, to answer Dave's question from his original posting, I'm certain
it is more a question of when rather than if we see a major software
vendor's products contaminated, possibly from the source on down, by
intruders on their network. I've heard rumors of this already having
happened, but I have no way to check them. However, the problem is
almost certain to shift from possibility to Technicolor reality some
time in the near future.

Perry Metzger

