Penetration Testing mailing list archives

Re: [PEN-TEST] Scanning Web Proxy -- Preliminary Concept


From: Philip Stoev <philip () STOEV ORG>
Date: Sat, 16 Dec 2000 00:19:39 +0200

Hello all,

Thanks for the constructive feedback. A number of tools were pointed out to
me, and I will be examining them closely.

However, it seems that most of the available tools were built to test a
single site, and that site is more or less expected to be under the tester's
control, that is, prepared to take hit after hit.

I have the following ideas, which differ from that approach:

1. The proxy server is not to be used by a single tester, but rather by a
group of testers, each of whom visits as many sites as possible. This allows
more sites to be at least somewhat analysed for security vulnerabilities.

Imagine that you are a security-conscious individual who likes to browse
around in his free time. From time to time, you sense that something is
wrong with a site you happened to visit, decide to view the HTML source, and
see your password there in plaintext. Of course, you will do that on only
about one site per day, while you visit hundreds.

What I want to accomplish is a proxy server that will inspect the HTML
source and look for that and other vulnerabilities on _each_ and _every_
site you visit, as if you were giving each one a piece of your
security-conscious mind. Once the proxy server sounds an alarm, you will
know that you have just visited a badly-coded site, and it is then justified
to stop and view the HTML source yourself to see what is going on.
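
To make this concrete, here is a rough Python sketch of the kind of
per-page check I have in mind. Every name in it is made up for
illustration; scan_html(), the submitted_secrets bookkeeping, and the
particular checks are assumptions, not any existing tool's API:

    import re

    def scan_html(body, submitted_secrets):
        """Cheap per-page checks run on every response passing through
        the proxy. submitted_secrets maps form field names to values the
        operator typed earlier in the session, which the proxy would
        have recorded as the requests went out."""
        warnings = []
        # A secret the user submitted is echoed back in the page source.
        for field, value in submitted_secrets.items():
            if value and value in body:
                warnings.append("value of '%s' appears in the HTML source"
                                % field)
        # Hidden form fields whose names suggest client-side state the
        # server should not be trusting (price, role, uid, ...).
        for tag in re.finditer(r'<input[^>]*type=["\']?hidden[^>]*>',
                               body, re.I):
            name = re.search(r'name=["\']?([\w.-]+)', tag.group(0), re.I)
            if name and re.search(r'price|role|admin|uid',
                                  name.group(1), re.I):
                warnings.append("suspicious hidden field '%s'"
                                % name.group(1))
        return warnings

So scan_html(page, {"passwd": secret}) would flag exactly the plaintext
password case described above, on every page, for free.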

Now imagine that you are not the only one browsing around, but rather it is
you and a bunch of your friends. During the day, you all could visit
hundreds of sites both together and separately, thus dramatically increasing
the chance that somebody will visit a site containing a security
vulnerability.

In brief, instead of you alone hitting your own server to death with
millions of requests, you and your friends will be hitting other people's
servers (lightly), and then hitting to death only those of them that are
promising, that is, those that produce warnings from the proxy server
during normal browsing.

2. Most of the products I have had the chance to read about include
functionality for issuing mass requests on their own, such as trying to
brute-force a cookie starting from 1 and ending with 9999999. I think this
may be a good approach, but only if you do it subsequently and explicitly,
and not as part of your everyday browsing through the scanning proxy server.
Instead, I would like the scanning proxy server to limit the requests it
makes on its own to about five per request the operator makes. For a simple
POST of a login form, those additional requests will be:

1) A request with a wrong username.
2) A request with a wrong password. If the replies to 1 and 2 differ, the
site allows us to confirm a username first and then guess the password
separately.
3) A request with a ' or " or ; in the username. If the reply differs from
both 1 and 2, we may have introduced an error into the SQL statement that
verifies the login, which in turn may allow us to insert our own SQL code
into it.
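
A sketch of how the proxy might derive these probes from one observed login
POST. Everything here is hypothetical: send() stands for whatever replay
routine the proxy uses, and comparing replies as exact strings is a
simplification, since real pages embed timestamps and the like:

    SQL_METACHARS = ["'", '"', ";"]

    def probe_login(fields, user_field, pass_field, send):
        """Issue the handful of extra requests described above for one
        observed login POST. fields is the original form data; send()
        replays a form and returns the response body."""
        def variant(field, value):
            probe = dict(fields)          # copy the observed form data
            probe[field] = value
            return send(probe)

        bad_user = variant(user_field, "no_such_user_xyzzy")
        bad_pass = variant(pass_field, "certainly_wrong")

        warnings = []
        # Distinguishable failure pages let an attacker enumerate
        # usernames before ever attacking passwords.
        if bad_user != bad_pass:
            warnings.append("bad-username and bad-password replies differ")
        # A third kind of reply when the username carries an SQL
        # metacharacter hints that it reached a query unescaped.
        for ch in SQL_METACHARS:
            reply = variant(user_field, "x" + ch)
            if reply != bad_user and reply != bad_pass:
                warnings.append("reply changes when username contains %r"
                                % ch)
                break
        return warnings

That is at most five extra requests per login, and a real version would
normalize the replies (strip dates, session ids) before comparing, which
ties into the cookie analysis below.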

If another request-reply pair from the same login form has been recorded in
the past, it is to be compared to the present one in order to discover any
similarities and differences. Additionally, the old request is to be
performed again, so that we can compare the reply to the old request then
vs. the reply to the old request now vs. the reply to the new request now.
This three-way comparison will allow the proxy server to separate
timestamp-related cookies from login-related ones. It will also detect
which portions of the cookies never vary, marking even long cookies as
easily brute-forceable if they turn out to be static enough.

It will be even better if the proxy server can proxy two login requests
with a single username but two different valid passwords. This will enable
it to separate username-related cookies from password-related ones.
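
A sketch of that comparison logic, assuming the proxy has stored the three
cookie observations described above. The function and its labels are my own
invention, and a real version would diff field by field rather than
character by character:

    def classify_cookie(then, replay_now, fresh_now):
        """Label each character position of a cookie by comparing three
        observations: the reply to the old request recorded earlier
        (then), the same old request replayed now (replay_now), and the
        new request sent now (fresh_now)."""
        labels = []
        for a, b, c in zip(then, replay_now, fresh_now):
            if a != b:
                labels.append("time")     # differs between two sendings
                                          # of the same request
            elif b != c:
                labels.append("session")  # differs only with the new login
            else:
                labels.append("static")   # never varies: free bits for
                                          # a brute-force attack
        return labels

    # A 16-character cookie where half the positions never change:
    labels = classify_cookie("AB1997cafe000042",
                             "AB2013cafe000042",
                             "AB2021beef000042")
    print("%d/%d positions static" % (labels.count("static"), len(labels)))

The same comparison, fed two logins with the same username but different
valid passwords, would separate the username-related positions from the
password-related ones.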

In brief, the purpose is to extract as much information from as few
additional requests as possible. There will be no hard hitting to make
e-commerce sites angry. Once you sense that something is wrong, you are
free to hit as much as you like.

The above-mentioned requests are to be performed at each login form we
encounter, without the need to define rules for each one separately.
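
Recognizing login forms generically is the easy part: any form containing a
password input qualifies. A minimal sketch on top of the Python standard
library (the class name and the tuples it collects are, again, just
illustrations):

    from html.parser import HTMLParser

    class LoginFormFinder(HTMLParser):
        """Collect (action, field names) for every form that contains an
        <input type="password">, i.e. every login form, with no per-site
        configuration."""

        def __init__(self):
            super().__init__()
            self.forms = []
            self._form = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "form":
                self._form = {"action": attrs.get("action") or "",
                              "fields": [], "login": False}
            elif tag == "input" and self._form is not None:
                self._form["fields"].append(attrs.get("name") or "")
                if (attrs.get("type") or "").lower() == "password":
                    self._form["login"] = True

        def handle_endtag(self, tag):
            if tag == "form" and self._form is not None:
                if self._form["login"]:
                    self.forms.append((self._form["action"],
                                       self._form["fields"]))
                self._form = None

    finder = LoginFormFinder()
    finder.feed('<form action="/login"><input name="user">'
                '<input type="password" name="pw"></form>')
    print(finder.forms)   # [('/login', ['user', 'pw'])]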

