
Nmap Development mailing list archives

Re: Difficult Nmap Question from IRC
From: Brandon Enright <bmenrigh () ucsd edu>
Date: Wed, 14 May 2008 21:46:25 +0000


On Wed, 14 May 2008 23:23:27 +0200
" mixter () gmail com" <mixter () gmail com> wrote:

> My idea would be to outsource this stuff into a new
> "proxy proxy" tool for the nmap toolchain, as it is not directly
> a nmap problem, and few people might need it for nmap.
>
> My suggestion would be an external app. that listens
> and works as a local socks proxy, and has a config file
> with proxies to forward all incoming sessions (socks/http),
> on every incoming connection (from a tool specifying a
> single host list, e.g. nmap) go through the list and attempt
> connections and relay the one back to the user that actually
> does work. But it's just a random idea, the whole thing is
> not strictly nmap related, yet maybe a nice to have tool.

The trouble with a proxy for port scanning is that unless it is a very
low-level proxy, SYN scans won't work anymore.  Also, the latency
estimates will be severely skewed.  It would be better to hack together
a connect() based port scanning script to handle the job than to use a
proxy like this.
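As a rough illustration, a connect()-based check of that kind could be sketched in Python as follows. The target list and the 2-second timeout are made-up examples, not anything from this thread:

```python
#!/usr/bin/env python3
"""Minimal connect()-based scanner for per-host port lists (sketch)."""
import socket

# Hypothetical (host, port) pairs -- one port of interest per host.
TARGETS = [("127.0.0.1", 22), ("127.0.0.1", 80)]

def probe(host, port, timeout=2.0):
    """Return True if a full TCP connect() to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in TARGETS:
        state = "open" if probe(host, port) else "closed/filtered"
        print(f"{host}:{port} {state}")
```

Note this performs full three-way handshakes, so it is slower and noisier than a SYN scan, which is exactly the trade-off the proxy approach forces on you anyway.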

Also, Unicornscan does handle this particular usage scenario:

# unicornscan -R 3 -r 100 -I -v

The internal Nmap code isn't built to handle per-host port lists and I
doubt Fyodor is going to want to tweak such a core element of Nmap to
support these scans.

I've thought about using a trade-off for these cases.

Suppose you have N ports and P hosts (one port per host).  Starting an
Nmap run has some fixed cost I.

If you scan all ports on all hosts in a single scan you get:

Time = N * P + 1 * I

If you go the other extreme and you scan each port on each host
individually you get:

Time = P + N * I

Now, the startup cost I for Nmap is going to vary from scanning machine
to scanning machine but is relatively fixed for your individual
computer.  Neither of these extremes is very good because on one end
you are scanning many redundant ports and on the other you are starting
many Nmaps.

Depending on the value of I, though, it may be advantageous to split
the P hosts into M groups, where each group's port list contains only
the ports that group actually needs, to average out the costs:

Time = (N * P) / M + (M * I)

To minimize the total time, pick M so that the two terms balance,
i.e. (N * P) / M = M * I, which gives M = sqrt((N * P) / I).
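Plugging in some made-up numbers (illustrative only): with N = P = 100 and a startup cost equivalent to I = 25 port probes, the two extremes and the balanced grouping work out to:

```python
import math

# Made-up illustration numbers, not from the discussion above:
N, P, I = 100, 100, 25          # ports, hosts, per-run startup cost

one_big_scan = N * P + I        # every port on every host, one run
one_scan_per_host = P + N * I   # one Nmap run per host:port pair

M = math.sqrt(N * P / I)        # group count that balances both terms
grouped = (N * P) / M + M * I   # time with M groups

print(one_big_scan, one_scan_per_host, M, grouped)
```

Here the grouped scan (1000 units) beats both extremes (10025 and 2600), which is the whole point of the trade-off.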

If this is a one-off scan never to be repeated then it doesn't matter
much; just pick one of the extremes and go for it.  If this is
something you do all the time, it should be easy to hack together a
perl/python/other script that takes a list of host:port pairs, makes a
few timing measurements, and does the splitting and scanning for you.
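A minimal version of such a script might look like the following. The target pairs, the group count M, and the choice of a plain `-p` port list per run are all assumptions for illustration:

```python
#!/usr/bin/env python3
"""Split host:port pairs into M groups and run one Nmap per group (sketch)."""
import math
import subprocess

# Hypothetical targets -- one port of interest per host.
PAIRS = [("10.0.0.1", 22), ("10.0.0.2", 80), ("10.0.0.3", 443),
         ("10.0.0.4", 8080)]

def split_groups(pairs, m):
    """Divide (host, port) pairs into m roughly equal groups."""
    size = math.ceil(len(pairs) / m)
    return [pairs[i:i + size] for i in range(0, len(pairs), size)]

def nmap_args(group):
    """Build an Nmap command covering only this group's ports and hosts."""
    ports = ",".join(str(p) for p in sorted({p for _, p in group}))
    hosts = [h for h, _ in group]
    return ["nmap", "-p", ports] + hosts

if __name__ == "__main__":
    for group in split_groups(PAIRS, m=2):
        try:
            subprocess.run(nmap_args(group), check=False)
        except FileNotFoundError:
            print("nmap not found; would run:", " ".join(nmap_args(group)))
```

Each run still scans some redundant ports within its group, but far fewer than one giant scan would, while paying the startup cost I only M times.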


PS:  This is some darn rough math and it somewhat simplifies the
situation.  I think it illustrates the point without getting too
complicated or overly pedantic.



Sent through the nmap-dev mailing list
Archived at http://SecLists.Org

