Nmap Development mailing list archives

Re: Memory problem when scanning testrange
From: Brandon Enright <bmenrigh () ucsd edu>
Date: Tue, 19 May 2009 16:37:04 +0000


On Tue, 19 May 2009 14:15:00 +0200 or thereabouts Dieter Van der Stock
<dietervds () gmail com> wrote:

> Hello everyone,
>
> While trying to run an Nmap scan against a range, the
> Nmap process is automatically killed because of too much memory usage.

I run into this pretty regularly.  I just have to be careful how big
the hostgroups I pick are and how many scans I run concurrently.

> In syslog it says (after a whole output dump of memory stuff):
> kernel: Out of memory: kill process 28648 (bash) score 8319 or a child
> kernel: Killed process 28662 (nmap)
>
> The Nmap command run was:
> /usr/bin/nmap -T4 -n -p 1-65535 -oX largescan.xml

Well 65535 ports * 32768 hosts = 2147450880 ports

Now, Nmap doesn't try to tackle it all at once; it scans in
hostgroups, which are generally <= 256 hosts.
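The arithmetic above can be sketched in a couple of lines (a back-of-the-envelope sketch; the 256-host group size is the general upper bound mentioned, not a measured value):

```python
# Rough arithmetic for the scan size described above.
ports = 65535
hosts = 32768                  # a /17
print(ports * hosts)           # 2147450880 port states across the whole range

# Nmap only holds state for one hostgroup at a time, though:
hostgroup = 256                # typical upper bound per the note above
print(ports * hostgroup)       # 16776960 port states live at once
```

So the working set at any moment is roughly 1/128th of the full scan, which is why the hostgroup size is the main memory knob.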

> The version of Nmap being used: Nmap 4.85BETA9

> Does anyone have any idea what can be done to prevent this?
> I suppose it's not an everyday-usage scenario of Nmap, but I'm basically
> checking out how far I can push it :)

The usage scenario you have above is unrealistic for TCP, but if you
really want it to work, add --max-hostgroup 32 to your scan.  You didn't
say how much free memory is available on your box or whether it is a
64-bit system like x86_64 (which significantly increases memory usage),
but I generally expect very large scans to use between 4 and 6 GB of
memory.
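For example, the original command with the suggested cap applied (the target range below is a placeholder; the original message did not include one):

```shell
# --max-hostgroup 32 caps concurrent hosts per group, shrinking the
# per-group port-state footprint.  10.0.0.0/17 is a placeholder target.
/usr/bin/nmap -T4 -n -p 1-65535 --max-hostgroup 32 -oX largescan.xml 10.0.0.0/17
```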

> Cheers and with regards to you all,


Unfortunately the above usage scenario may not be terribly realistic
for TCP but it is common for UDP.  There is a TODO item to look at
memory usage with large UDP scans but I don't think the problem is at
all limited to UDP.  I think TCP consumes just as much memory but that
we don't notice it like we do with UDP because we don't do monster scans
with TCP.

Also note that even if you have enough memory for the scan itself, the
act of outputting your scan results may double (or more) your memory
usage while the string is being constructed in memory and written to
the screen or a file.  To fix this significant memory spike on output,
we'd have to entirely change how output.cc and some of the supporting
code works and is designed.
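The spike can be illustrated in miniature (an illustration of the general pattern, not Nmap's actual output.cc code): building the whole report as one string keeps a second full copy alive before a single byte is written, whereas streaming each record avoids that.

```python
import io

# Stand-in scan results, not real Nmap output.
results = [f"port {p}/tcp open" for p in range(1, 1001)]

# Spiky: the entire report exists as one extra string before writing.
report = "\n".join(results) + "\n"

# Flatter: format and write each record as you go; no second full copy.
buf = io.StringIO()
for line in results:
    buf.write(line)
    buf.write("\n")

assert report == buf.getvalue()
```

With a real file handle in place of the StringIO, the streaming version's peak overhead is one line rather than the whole report.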

If you really want to scan 64K ports on a /17, expect it to take about
24 hours with an average parallelism of 40 and to use about 6 GB of RAM.
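As a sanity check on that estimate (my arithmetic, not a benchmark): a /17 at 64K ports works out to roughly 25 thousand probes per second sustained over 24 hours, before counting any retransmissions.

```python
hosts = 2 ** (32 - 17)        # a /17 -> 32768 hosts
ports = 65535
probes = hosts * ports        # one probe per port, ignoring retransmits
seconds = 24 * 60 * 60        # the ~24-hour estimate above
print(probes // seconds)      # 24854 -> roughly 25K probes/sec sustained
```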




Sent through the nmap-dev mailing list
Archived at http://SecLists.Org
