mailing list archives
RE: Security Industry Under Scrutiny: Part 3
From: "Steve W. Manzuik" <steve () entrenchtech com>
Date: Fri, 6 Dec 2002 10:47:47 +0900
-----BEGIN PGP SIGNED MESSAGE-----
First off, I want to say that I'm extremely open to
intelligent debate. The key word in that statement being
"intelligent". I am no longer paying attention to personal
attacks, or to people who try to use buzz-words and trendy
phrases as the basis for their logic. YOU PEOPLE SUCK!

This was a really good post; I think you touched on some good points
that I would like to comment on.
> In light of who will access this vuln information we can now
> pinpoint a few areas in need of critical improvement. First
> of all is the proof of concept code being released into the
> wild via the whitehats website. Removing tools from the net
> means that you remove the threat of socially inept morons
> (who have really good s34rch 3ng1n3 sk1llz ph33r) finding the
> information and using it to exploit others.

The problem with this is that there will always be someone who feels it
is their right (free speech and all that jazz) to post what they want on
their website, and there will always be those who write/post exploit
code. How do you propose that this is prevented?

> If you don't like this idea, or if you rely upon the internet
> to transfer proof of concept code to someone who is legitimately
> seeking to improve security, then why not implement another
> measure: make this information harder to get. You could try a
> 'members only' section, or ask people to email you with a
> good reason why they should have a copy of the code. Of
> course these solutions can be a bit dodgy... It would be
> great to hear some suggestions from the community... (yes,
> this is a subtle hint).
Unfortunately, history has proven that even your trusted sys-admin could
be a script kiddie or malicious. How do you prevent the code from being
distributed? Time-bombed binaries? What about the inept software vendors
who *require* proof of concept code before they even consider looking at
a problem? What about organizations like CERT, who have had proof of
concept code mysteriously leak? What about vendors who will only give
patches to companies who "donate" or pay for them? What about the poor
one- or two-man open source project that, while creating a great free
product, doesn't have the first idea how to fix a specific issue?
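For what it's worth, a "time-bombed" binary in the simplest sense just
refuses to run past a drop-dead date. A minimal sketch of the idea (the
date and names here are made up for illustration, not taken from any
actual PoC):

```python
from datetime import date

# Hypothetical drop-dead date compiled into a distributed proof-of-concept.
EXPIRY = date(2003, 1, 1)

def is_expired(today=None):
    """Return True once the build has passed its drop-dead date."""
    if today is None:
        today = date.today()
    return today >= EXPIRY

def run_poc(today=None):
    """Refuse to do anything once the bomb has gone off."""
    if is_expired(today):
        return "expired: refusing to run"
    return "running demo"
```

Of course this only gates on the system clock, which anyone can set
back -- part of why time-bombing does little against a determined
redistributor.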
I don't think that any system is foolproof. A persistent person could
get himself access to the members-only stuff and convince people to
share code. I don't see how you can effectively police this other than
killing full disclosure completely, which I don't think is a good
option. Yes, full disclosure is flawed, but I think it is less flawed
than the alternatives.
> In this case, we're saying that an advisory, which contains
> information on how to compromise a system, is not the kind of
> information that should be available to just *anyone*. I
> think there are two main problems with mailing lists:

So you expect mailing list moderators to be the judge of who deserves
what? How do you ensure that the moderator of the mailing list is acting
ethically and in good faith? Are you willing to put that trust into the
hands of mailing list moderators?
> a) Moderators should be doing more to ensure that the kinds of
>    advisories that make it to the list aren't the kind that can
>    be used to compromise a system.
>
> We need to be more careful about what we're telling society.
> Some secrets are better kept from others because they are
> secrets that [...]
So, as a moderator of a mailing list, if I receive an advisory that
contains specific exploit information, I should edit that post or retype
it without the explicit information before letting it through? I have a
feeling many of the contributors to mailing lists would have a big
problem with that. What makes any moderator qualified to decide what is
right and what is wrong? Sure, common sense can kick in and play a role,
but in some cases just a general description of a problem is enough to
make others look at it and find the hole.

As someone who came from an IT background, I liked getting the full
details so that I could test ways to mitigate the risk, especially if a
patch wasn't available. My mistrust of vendors at the time also made me
test patches. I realize that this isn't the norm in the IT world, but it
was for mine.
> [...] society who are not bound by any professional ethic to [...]
> Unlike software vendors, whitehats, and system admins, these
> people can get away with harbouring malicious intentions because
> they are not expected to act any differently.
Again, history has proven that even so-called whitehats and system
admins can cross the line and be malicious with their knowledge. Ever
have to go into an organization and "clean up" after a sys-admin was let
go from his job? Your plans work great on the assumption that all
"whitehats" and sys-admins are ethical and professional -- most are, but
it only takes a few bad apples.
> 1. I make cute ascii diagrams, doncha think?
>
> 2. We need to place better control measures in the following areas:
>    a) What moderators consider to be "acceptable" advisories
>    b) On whitehat websites that provide proof of concept code
>    c) Lists in general, because they are read by evil ppl and
>       not just good ppl
I would love to hear some ideas on how we control this. We already do a)
to a point at Vulnwatch, but I really don't think it is my place to tell
a contributor how much detail to post, or to deny a post just because
the person did not work with the vendor. Yes, I try to talk to that
person and see why they chose to do what they do, but in the end I still
let the advisory through. If someone were to post "Here are directions
on how to own XYZ company", of course that should not make it to the
list.
> 3. The security industry is getting a bad name for itself because
>    of money-grabbing "security consultants" and participants who
>    leech information to be used for malicious activities. We need
>    to find a way to remove these kinds of people from the system.
So are you saying that all security consultants are bad? While I agree
that there are a large number of them out there who have no business
collecting paychecks for what they do, I don't think that they are all
bad. Again, it only takes a few bad apples. Is it a bad thing as a
consultant to help a client set up a firewall properly, or install and
teach them how to use Snort? No, I don't think it is. As long as you
leave your customer in a more secure state than before, and as long as
you leave the customer with knowledge, I don't see the problem. You are
educating those who need to be educated to protect their business.
> A new industry standard for operating business?
Great, but this will simply create work for the security consultants out
there to "help clients get to and maintain the new standard." Shit, we
have already seen this in the USA with HIPAA and more recently in Canada
with Bill C-6. This will feed the beasts.
> Tighter cyber-laws for websites that seem to tell ppl "how to [...]"
So how about we license pen-testers and consultants too? Of course I am
being sarcastic here. What would this do to someone, or a group of
researchers, who create code but keep it for themselves? What happens
when that person or group gets owned and the code is leaked? Say the USA
passes a law preventing this kind of information -- we already know the
rest of the world won't follow. There will always be somewhere to put
your website of how-to information and tools. So what do we do? Own the
sites and wipe their drives? What gives anyone the right to do this, or
to judge?
> Let's start being more responsible with our work. Let's stop
> rewarding malicious people with ready-to-go exploits. Let's
> stop educating our enemies.
But (and I think you are asking the same question in this post), how do
we educate those who need to be educated, and prevent the enemies from
getting the information? There will always be bad people with knowledge
and power.
-----BEGIN PGP SIGNATURE-----
Version: PGP 8.0 (Build 294) Beta
-----END PGP SIGNATURE-----
Full-Disclosure - We believe in it.