
Nmap Development mailing list archives

Re: Slow name-resolution of very large target list
From: Brandon Enright <bmenrigh () ucsd edu>
Date: Fri, 23 May 2008 02:24:03 +0000


Hi Doug,

I went ahead and decided to give Nuff a try.  Very impressive so far.
It's a little too aggressive when I give it a list of more than
about 200 names, but I'm happy to chunk my input.

While testing, I found an input list that causes Nuff to error out with:
Error: set-input-port: needs 1 argument(s)

I can't figure out what about the list is causing this problem.  When I
delete just a few random hosts from the list it passes.  When I scan
just the first 200 or last 200 it passes.  I can reproduce the problem
but I can't figure out which variables cause it to pass or fail when I
adjust the input list.  I'm including the list so that you can take a
look.


On Thu, 22 May 2008 18:37:24 -0700
doug () hcsw org wrote:

Hi Brandon,

On Thu, May 22, 2008 at 02:13:49PM -0700 or thereabouts, Fyodor wrote:
On Thu, May 22, 2008 at 07:16:31AM +0000, Brandon Enright wrote:
I've tried the scan from another network that has access to many
very fast local DNS servers and have specified them with
--dns-servers, but that didn't seem to make any noticeable difference.

Fyodor beat me to it--I was about to say almost the exact same thing.
The nmap DNS system is for reverse DNS only; other types of DNS
lookup were never in the spec when nmap_dns was created. Like Fyodor
says, when you do need to do this, it is usually easiest to use a
separate program. I think the adns library ships an example that does
exactly this. (?)

SHAMELESS PLUG: Nuff has a program that will do this for you, but
how well it scales to millions of resolutions, I don't know:


$ printf 'a.com\nb.com\n' | nuff resolve -stdin

Alternatively, there is a pattern I frequently use for doing things
in parallel. I call it fork/wait:

# fork/wait skeleton by frax (patent pending)

my $parallelism = 10;
my $children = 0;

while (<>) {
  chomp;
  if (fork() == 0) {            # child: do one lookup, then exit
    system("host -t A $_");
    exit 0;
  }
  $children++;                  # parent: track outstanding children
  if ($children >= $parallelism) {
    wait();                     # block until one child finishes
    $children--;
  }
}

while ($children > 0) {         # reap the stragglers
  wait();
  $children--;
}
The above will do up to 10 resolutions in parallel. Use it like this:

cat domains.txt | perl pwnyresolv0r.pl > output.txt

The output should be like the following. It will require a little
post-processing and the results WILL NOT be in the same order as your
input file.

hcsw.org has address
slashdot.org has address
google.com has address
google.com has address
google.com has address

Note for doing millions of resolutions this may not be efficient
enough because every resolution spawns 3 (!) processes: Another perl
instance, the shell created by system(3), and an instance of host(1).
For short lived tasks like DNS resolution the process overhead might
be too much. However, fork/wait is a great way to parallelise many
types of IO bound programs.
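If the process overhead does become a problem, the same fork/wait idea can be kept in a single process. Here is a hedged Python sketch (not Nuff's code, and the `resolve`/`resolve_all` names are my own) that does the lookups with a thread pool, so each resolution costs a thread instead of three processes:

```python
# Sketch: parallel forward DNS resolution in one process using threads.
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve(name):
    """Return (name, address), or (name, None) if resolution fails."""
    try:
        return name, socket.gethostbyname(name)
    except socket.gaierror:
        return name, None

def resolve_all(names, parallelism=10):
    # Unlike the fork/wait output, pool.map preserves input order.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(resolve, names))

# Usage:
#   for name, addr in resolve_all(open("domains.txt").read().split()):
#       print(name, "has address", addr or "FAILED")
```

A nice side effect is that results come back in input order, so no post-processing pass is needed to match names to addresses.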

Hope this helps,


PS. Be careful about locking your output stream. I think the above
code is OK because the writes will all be atomic, but if you're not
careful you can get things like:

hcsw.org hslashdot.org has address
as address
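When the writes are not guaranteed to be atomic, the usual fix is an explicit lock around each whole-line write. A hypothetical Python illustration of that idea (not part of the original Perl script; the `report` helper and the RFC 5737 dummy addresses are my own):

```python
# Sketch: serialise writes from parallel workers with a shared lock,
# so one complete line is emitted before another worker can interleave.
from multiprocessing import Lock, Process

def report(lock, name, addr):
    with lock:  # hold the lock for exactly one full line
        print(f"{name} has address {addr}", flush=True)

if __name__ == "__main__":
    lock = Lock()
    # 192.0.2.x are RFC 5737 documentation addresses, used as dummies
    jobs = [("hcsw.org", "192.0.2.1"), ("slashdot.org", "192.0.2.2")]
    procs = [Process(target=report, args=(lock, n, a)) for n, a in jobs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

The lock makes the line boundaries safe regardless of the underlying stream's buffering, at the cost of briefly serialising the workers at output time.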


Attachment: crashlist.txt

Sent through the nmap-dev mailing list
Archived at http://SecLists.Org

