
Nmap Development mailing list archives

Re: hadoop and hbase information gathering
From: David Fifield <david () bamsoftware com>
Date: Wed, 12 Oct 2011 15:07:45 -0700

On Sat, Oct 08, 2011 at 03:48:14PM +0200, John Bond wrote:
I have written a couple of scripts to scrape various Hadoop and HBase
status pages. It would be great to get feedback on whether this works
with different versions; this is tested on Hadoop 0.20. All scripts
also implement newtargets where appropriate, so you can run a command
like the following to discover an entire Hadoop cluster:

nmap --script hadoop-datanode-info,hbase-master-info,hadoop-jobtracker-info,\
hadoop-namenode-info,hadoop-tasktracker-info,hbase-region-info \
  --script-args hadoop-jobtracker-info.userinfo,newtargets \
  -p 60010,50030,50070,50075,50060,60030 master-hadoop-server.example.com

Thanks for these nice scripts, John.

All the scripts seem to work in a similar way: Grab a web page, do some
matching on the response body, and format the results. For this reason I
think it would be wise to factor out the common behavior. Patrik
proposed an httpmatch library that might be appropriate:

http://seclists.org/nmap-dev/2011/q2/377

But I'm thinking it may be better to add the scripts first, and refactor
later.
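The shared pattern described above (fetch a status page, match against the response body, format the results) could be factored into a small helper in NSE Lua. The following is a hypothetical sketch only; the names `fetch_and_match` and `patterns` are illustrative and are not the API Patrik proposed:

```lua
-- Hypothetical sketch of a shared fetch/match/format helper for these
-- scripts. Uses Nmap's real NSE http and stdnse libraries, but the
-- helper itself is an assumption, not the proposed httpmatch API.
local http = require "http"
local stdnse = require "stdnse"

-- Fetch <path> from the target and run each named Lua pattern over the
-- response body, returning a table mapping label -> first capture.
local function fetch_and_match(host, port, path, patterns)
  local response = http.get(host, port, path)
  if not response or not response.body then return nil end
  local results = {}
  for label, pattern in pairs(patterns) do
    results[label] = response.body:match(pattern)
  end
  return results
end

-- Example use inside a script's action function (path and pattern are
-- illustrative):
-- local info = fetch_and_match(host, port, "/dfshealth.jsp",
--   { version = "Version:%s*([%d%.]+)" })
-- if info then return stdnse.format_output(true, info) end
```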

Can you recommend an easy way to set up Hadoop and Hbase to test?

David Fifield
_______________________________________________
Sent through the nmap-dev mailing list
http://cgi.insecure.org/mailman/listinfo/nmap-dev
Archived at http://seclists.org/nmap-dev/
