Nmap Development mailing list archives

Re: [Bug]? -iR <num_hosts> on windows XP generates duplicate targets
From: Brandon Enright <bmenrigh () ucsd edu>
Date: Wed, 30 Apr 2008 21:44:42 +0000


Hi Jah, this patch looks good; I've applied it.  The rest of my
comments are inline.

On Thu, 24 Apr 2008 23:49:19 +0100
jah <jah () zadkiel plus com> wrote:

> On 24/04/2008 04:56, Fyodor wrote:
>> It may be OK that Windows RAND_MAX is 32K (15 bits), because we only
>> use 16 bits per call anyway:
>>
>>   for(i=0; i < sizeof(bytebuf) / sizeof(short); i++) {
>>     iptr = (short *) ((char *)bytebuf + i * sizeof(short));
>>     *iptr = rand();
>>   }
>>
>> Maybe we should only be doing one byte at a time, since the high bit
>> of every 2nd byte we generate may always be zero on Windows.  Anyone
>> want to test this and make a patch?  The patch could check RAND_MAX
>> and use that to decide the number of bytes to use per call.
> It certainly is the case that the second byte returned by a call to
> rand() never has a value of more than 127!!  Quite shocking.

Shocking is how low quality rand() still is after so many releases of
Visual Studio.

> I've made an attempt at the change you suggested, Fyodor, and attached
> the patch.  Here's some test results with the patch applied:
>
> $ for i in 100 200 400 600 800 1200 1600 3200 6400 10000 100000 500000; do
>     COUNT=`nmap -n -sL -iR $i | egrep '^Host' | sort -u | wc -l`
>     echo $i $COUNT
>   done
> 100 100
> 200 200
> 400 400
> 600 600
> 800 800
> 1200 1200
> 1600 1600
> 3200 3199
> 6400 6400
> 10000 9999
> 100000 99901
> 500000 495727

I /think/ the repeated IPs you see in these tests come not from the
short period of rand() itself but from the fact that PRNGs based on an
LCG have cycles in the lower bits that are shorter than the full
period.  This would explain why, even though you get a duplicate in the
3200 test, you don't get that many more in the 500k test.  Even 500k
IPs is too few to run through the full rand() period.

Incidentally, it is these short cycles in the lower bits that often
defeat poorly implemented LCGs.  Storm (the worm) is a recent example
of this low-bit cycle problem.

> One thought that occurred to me is whether it might be a more
> economical use of our random numbers if, instead of throwing away 4
> bytes each time a reserved IP address is generated, we drop the first
> byte, shift the remaining three along and fetch a single byte to
> complete a new IP address.  I'm not sure whether this would have any
> positive or negative effects on either randomness or performance.
> It might be worth looking into though?



Testing suggests that while RAND_MAX is 2^15, the rand() internal state
is actually 2^24, so the byte stream repeats after about 16.7M bytes
and we can make at most about 16.7M IPs out of it.  It turns out that
shifting out a byte when the IP is "bad" and consuming just 1 more byte
rather than 4 will not let us make any more unique IPs, though it would
reduce the number of IPs you have to generate before all possible IPs
are exhausted.  I'm being a little hand-wavy here, but the details of
why this is the case are not that important.
The end result of all of this is that on Windows Nmap can only make
11215879 unique IPs, and if you pass a number greater than about 64M to
-iR you should be able to generate all of them.  I have generated a
list of 100M IPs several times and confirmed that it always produces
the same 11215879 unique addresses.

So what this really gets down to is the need to stop using rand() on
Windows.  OpenSSL provides an excellent RNG, so if/when OpenSSL gets
integrated into the Windows build we can side-step this whole issue.

The other (not mutually exclusive) option available to us is to
implement our own PRNG using the Mersenne Twister or some other
high-quality PRNG.  This has a few advantages over what we have right
now:
* Even if OpenSSL isn't compiled in, we'll still have a good RNG source

* We could implement a --seed option to generate the *same* set of IPs
across all operating systems Nmap runs on

* There is a big coolness factor associated with quality PRNGs ;-)

Assuming we end up using OpenSSL, the only real reason we would want a
fallback PRNG is so that we can implement --seed.  If no one thinks
--seed would be useful, there isn't much point in doing anything other
than moving to OpenSSL.

Finally, one thing that hasn't been addressed in this patch is that on
*nix, Nmap first tries "arandom" and then "urandom" before trying
"random".  If a box doesn't offer [au]random but does offer random,
Nmap will block -- /dev/random doesn't provide randomness as fast as
Nmap consumes it.  The use of /dev/random should be dropped in favor of
falling back on rand() or, in the future, OpenSSL.

If anyone thinks --seed would be useful now is the time to chime in!




Sent through the nmap-dev mailing list
Archived at http://SecLists.Org
