PaulDotCom mailing list archives

Re: Looking for a good web spider
From: Matt Erasmus <matt.erasmus () gmail com>
Date: Sat, 25 Sep 2010 15:26:41 +0200


On 25 September 2010 02:46, Adrian Crenshaw <irongeek () irongeek com> wrote:

    I'm looking at some of the tools in BT4R1, and will be looking at what
    Samurai WTF has to offer once I finish downloading the latest version.
    I'm looking for some sort of spider that lets me do the following:

    1. Follow every link on a page, even onto other domains, as long as the
    top-level domain name is the same (edu, com, cn, whatever)
    2. For every page it visits, collect the file names of all resources.
    3. The headers, so I can see the server version.
    4. Grab the robots.txt if possible.

I'd probably stick with wget and a simple bit of bash scripting.

      wget --spider -r -o log.txt http://myballsaresore.com
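Building on that, the log wget writes can be mined for points 2-4 with a little more shell. A rough sketch (the sample log contents and example.edu hosts below are made-up stand-ins; in practice log.txt is the file produced by the wget command above):

```shell
#!/bin/sh
# Stand-in log.txt for illustration only -- normally this is the output of
# "wget --spider -r -o log.txt http://yourtarget".
cat > log.txt <<'EOF'
--2010-09-25 15:26:41--  http://example.edu/index.html
--2010-09-25 15:26:42--  http://example.edu/css/style.css
--2010-09-25 15:26:43--  http://other.example.edu/index.html
EOF

# Point 2: pull the unique URLs (and hence resource file names) wget visited.
grep -o 'https\?://[^ ]*' log.txt | sort -u

# Point 3: -S makes wget print the server response headers (to stderr), e.g.:
#   wget --spider -S http://example.edu 2> headers.txt
# Point 4: robots.txt is just another fetch, e.g.:
#   wget -q -O robots.txt http://example.edu/robots.txt
```

Note wget has no built-in "same TLD" switch for point 1, but -D/--domains lets you whitelist the domains you want it to recurse into.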

Matt Erasmus <matt.erasmus () gmail com>
Pauldotcom mailing list
Pauldotcom () mail pauldotcom com
Main Web Site: http://pauldotcom.com
