Re: Memory problem when scanning testrange
From: Dieter Van der Stock <dietervds () gmail com>
Date: Wed, 20 May 2009 09:35:18 +0200
@Brandon: Thanks for typing up such a clear reply! It is most helpful.
@Fyodor: Apologies that I haven't included that info before :) The machine
in question has only 512K of RAM, so I'm not expecting much of it. Still, I'm
going to run the test again today or tomorrow, or a couple of times, and try to keep
an eye on top stats. I'll let the list know when I have gathered a bit more information.
2009/5/19 Fyodor <fyodor () insecure org>
On Tue, May 19, 2009 at 02:15:00PM +0200, Dieter Van der Stock wrote:
While trying to run an Nmap scan against a 10.10.0.0/17 range, the Nmap
process is automatically killed because of too much memory usage.
In syslog it says (after a whole output dump of memory stuff):
kernel: Out of memory: kill process 28648 (bash) score 8319 or a child
kernel: Killed process 28662 (nmap)
The Nmap command run was:
/usr/bin/nmap -T4 -n -p 1-65535 -oX largescan.xml 10.10.0.0/17
The version of Nmap being used: Nmap 4.85BETA9
Does anyone have any idea what can be done to prevent this?
I suppose it's not an everyday usage scenario for Nmap, but I'm basically
checking out how far I can push it :)
Hi Dieter. How much RAM do you have on the system? How much of it is
free (not used by all the other applications running) before you start
Nmap? How much is Nmap using when you look at it in top or the like?
What does the growth look like? Does it start out more reasonable and
then over the hours/days of the scan continue growing more and more?
Your command is not unreasonable, and we should make sure that Nmap
does not use an unreasonable amount of memory in that case. For
example, we could reduce the default host group size when so many
ports are being scanned. Or maybe there is a memory leak we can fix,
or in-memory structures we can optimize.
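For what it's worth, the host-group-size reduction Fyodor mentions can already be approximated from the command line today: Nmap's `--max-hostgroup` option caps how many targets are scanned in parallel, which bounds the per-group port-state memory when all 65535 ports are being probed. A rough sketch (the group size of 64 is purely illustrative, not a value anyone in this thread tested):

```shell
# Same scan as before, but cap the parallel host group at 64 hosts so
# the in-memory port state per group stays bounded. 64 is an
# illustrative value, not a tuned recommendation.
/usr/bin/nmap -T4 -n -p 1-65535 --max-hostgroup 64 -oX largescan.xml 10.10.0.0/17
```

Smaller groups trade some scan speed (fewer hosts probed concurrently) for a lower peak memory footprint, so it may be worth experimenting with the value while watching top.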
Sent through the nmap-dev mailing list
Archived at http://SecLists.Org