Nmap Development mailing list archives

Re: Fwd: hadoop and hbase information gathering
From: John Bond <john.r.bond () gmail com>
Date: Wed, 9 Nov 2011 20:32:08 +0100

On 1 November 2011 04:52, David Fifield <david () bamsoftware com> wrote:
> On Sun, Oct 30, 2011 at 10:46:33AM +0100, John Bond wrote:
>> On 14 October 2011 00:14, John Bond <john.r.bond () gmail com> wrote:
>
> Okay. I can see the reason for this. All these different scripts run
> against different ports, but they are all HTTP. Patrick found that his
> university's Hadoop ran on different ports than the default.
>
> Using shortport.http should take these scripts out of default, I think,
> because they will only get a response from a minority of web servers. I
> might even modify the rule to be "got a service match for HTTP, but it
> is *not* running on a common HTTP port." Then it could be default again.

Ok, I think I get what you mean. I have updated the port rule to use the following:
portrule = function(host, port)
        local force = stdnse.get_script_args('hadoop-info.force')
        if not force then
                return shortport.http(host, port) and port.number ~= 80 and port.number ~= 443
        end
        return true
end

This also allows a user to pass hadoop-info.force to force the script
to run even on ports 80/443.  hadoop-info.force applies to all the
hadoop-*-info scripts, and hbase-info.force applies to all the
hbase-*-info scripts; I thought that better than having a separate arg
for each script.
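To make the decision logic concrete, the portrule discussed above can be modeled outside NSE roughly as follows. This is an illustrative Python sketch, not NSE code: the function name `should_run` and the `looks_like_http` flag (standing in for shortport.http's service match) are assumptions for demonstration only.

```python
# Sketch of the portrule decision, modeled in Python for illustration.
# In the real script the HTTP check is shortport.http; here the
# looks_like_http parameter stands in for that service match.

def should_run(port_number, looks_like_http, force=False):
    """Return True if a hadoop-*-info-style script should run on this port."""
    if force:
        # The *.force script-arg overrides the exclusion,
        # running even on the common web ports 80/443.
        return True
    # Otherwise: run only on HTTP services that are NOT on common web ports.
    return looks_like_http and port_number not in (80, 443)

print(should_run(50070, True))           # non-standard HTTP port: runs
print(should_run(80, True))              # plain web server: skipped
print(should_run(80, True, force=True))  # forced: runs anyway
```

The point of the exclusion is exactly the one raised in the quoted message: the scripts only get a useful response from a minority of web servers, so default-category runs should avoid hammering every ordinary site on 80/443.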

> I'm curious, what does a plain -sV scan output for these ports?
> http://hadoop.apache.org/hdfs/docs/r0.21.0/hdfs_user_guide.html says
> "The NameNode and Datanodes have built in web [...]"

It's all "Jetty httpd 6.1.26", at least on the versions I have.

> If we could do a quick check retrieval of /index.html (which would be
> cached) and use that to control whether the other scripts run, then
> they could be default too.

I am not too sure what you mean here; however, when the script runs,
the first thing it does is try to get the appropriate start page, and
if the response is not a 200 then the script exits.  I will send you
the index pages from each service off list.
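The "fetch the start page and exit unless it returns 200" check can be sketched as follows. This is a minimal Python model of the behavior described above, not the script's actual NSE http-library code; the function name and URLs are illustrative assumptions.

```python
# Sketch of the early-exit check: fetch a service's start page and
# continue only if the server answers with HTTP 200.
import urllib.request
import urllib.error

def start_page_ok(url):
    """Fetch the start page at url; return True only on HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection failure, timeout, or non-2xx status (HTTPError):
        # treat all of these as "not usable", as the script would exit here.
        return False
```

A script gated this way costs one cacheable GET per target before doing any heavier probing.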

>> However, these changes have introduced another issue.  When using
>> newtargets the port rule is not triggered, and therefore scripts don't
>> run for the new targets.  I haven't looked at this yet but wondered if
>> it is a known issue?
>
> Why doesn't the portrule trigger? Are the new targets running the same
> services on the same ports?

I have tested this again and it worked; I think this could have been an
issue of user error.

> It's a known issue. Let's not worry about it too much now. The target
> may be scanned twice but not three times, as newtargets checks for
> duplicate targets that it adds itself.

Ok.

On 8 November 2011 17:03, David Fifield <david () bamsoftware com> wrote:

> I have committed all the scripts. What I have done is restore the
> original targeted portrules and leave the scripts in the "default"
> category. Unfortunately this means that they won't work for environments
> like Patrick's where the ports aren't the default. I'm open to ideas to
> fix this.

Ok, cool. The changes mentioned above are available here

> I'm still interested in finding out what plain -sV reports for these
> Hadoop HTTP servers.

See above; let me know if you need more info.

Also, sorry for the late response; I have been away at conferences for
most of the last 3 weeks.

cheers john
Sent through the nmap-dev mailing list
Archived at http://seclists.org/nmap-dev/
