
Nmap Development mailing list archives

PRNG benchmark results and proposal
From: Brandon Enright <bmenrigh () ucsd edu>
Date: Mon, 5 May 2008 00:45:05 +0000


Rather than go off coding without a solid idea of the strengths and
weaknesses of the various PRNG options, Fyodor suggested that I
benchmark our options first.

I went ahead and wrote a little stand-alone C application to test each
PRNG option we've come up with:

* /dev/urandom

* openssl

* rand() (using a byte at a time like on Windows)

* rand() (using a short at a time like on *nix)

* dnet_rand

For each option, I tested how long it takes to generate 0x0FFFFFFF
bytes (256 MB).  I used the same interface Nmap uses:
get_random_bytes() with a 2 KB cache, and get_random_u32() to fetch 4
bytes at a time.  This helps amortize the cost of opening /dev/urandom
but is actually slower for most of the other options.

I've run each PRNG three times on two different computers.  For each
PRNG, the first group of three runs is from a fast Pentium 4 (x86) and
the second group of three is from a _very_ fast Xeon (x86_64).

/dev/urandom (Pentium 4):
real    3m9.697s
real    3m8.420s
real    3m7.719s

/dev/urandom (Xeon):
real    1m17.290s
real    1m15.351s
real    1m15.276s

openssl (Pentium 4):
real    2m2.557s
real    2m1.195s
real    1m54.029s

openssl (Xeon):
real    1m19.549s
real    1m23.772s
real    1m22.565s

rand(), byte at a time (Pentium 4):
real    0m12.468s
real    0m12.467s
real    0m13.914s

rand(), byte at a time (Xeon):
real    0m3.766s
real    0m3.763s
real    0m3.987s

rand(), short at a time (Pentium 4):
real    0m8.555s
real    0m8.024s
real    0m8.014s

rand(), short at a time (Xeon):
real    0m2.157s
real    0m2.316s
real    0m2.165s

dnet_rand (Pentium 4):
real    0m7.077s
real    0m6.452s
real    0m6.421s

dnet_rand (Xeon):
real    0m2.573s
real    0m2.577s
real    0m2.566s

Quite surprisingly, /dev/urandom is at least as slow as OpenSSL.  The
calls to rand() are fast but, as discussed in a previous thread, can be
of very poor quality.  DNet's ARC4 PRNG is by far the fastest of the
quality options and offers arguably as high-quality a random stream as
/dev/urandom or OpenSSL.
So, I propose we change all random number generation to use DNet's PRNG
at all times.  It is fast, portable C, already included in Nmap, and
quite a bit higher quality than we actually /need/.

Assuming this is a good choice, the hard part is deciding /how/ we
include it.  The caching provided by the current get_random_bytes()
routine is unnecessary overhead if we use dnet.  DNet's rand.c library
also provides nearly the same API as Nmap's nbase_rnd.c but uses
different typedef'd sizes (u32 vs. uint32_t, for example).  Finally, we
probably don't want to make nbase depend on dnet.

Here's how I suggest we proceed:

* We re-write rand.c in the nbase coding style, with nbase's typedef'd
names and the nbase_rnd API (a nearly 1-to-1 translation in all
regards), while eliminating unneeded functionality (adding entropy,
random sorting, etc.)

* Remove rand.c from dnet; our stripped-down version of dnet doesn't
actually make use of it anyway

* All of Nmap's existing calls into nbase_rnd can be left alone, and
all calls to rand() can be adapted to the new nbase_rnd implementation

The primary reason for the proposed re-write is not to side-step
DNet's BSD license.  If we simply wrap rand.c with nbase_rnd we'll
have a redundant layer that adds inefficiency, and it will also give
nbase a dependency it doesn't currently have.  The re-write is short
and simple, so the benefit will outweigh its cost.

I'd like someone to weigh in on BSD/GPL licensing conflicts before I
proceed to either use rand.c or re-write rand.c for nbase_rnd.




Sent through the nmap-dev mailing list
Archived at http://SecLists.Org

