Re: Web crawling library proposal
From: Patrick Donnelly <batrick () batbytes com>
Date: Wed, 19 Oct 2011 15:45:06 -0400
On Wed, Oct 19, 2011 at 3:25 AM, Paulino Calderon
<paulino () calderonpale com> wrote:
> I'm attaching my working copies of the web crawling library and a few
> scripts that use it. It would be great if I could get some feedback.
For the library itself:
o I'm not convinced a Queue implementation is necessary. I'd prefer
just using table.insert/table.remove until evidence is presented that
it is a performance bottleneck.
o Libraries should not use the registry. Provide an interface to
access private data instead.
o is_url_absolute should anchor the pattern search to the beginning of the URI
o Make get_sitemap return an iterator instead of a table of results.
o Does get_sitemap return the URI for every site that's been crawled?
Shouldn't it return only what we requested it to crawl? It would also
appear that if two scripts try to crawl at the same time, bad things
happen with the global queue structures (among other things).
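To illustrate the queue point: a plain Lua table already works as a FIFO
via table.insert/table.remove. This is a hypothetical sketch, not the
library's code; table.remove(t, 1) shifts the remaining elements and is
O(n), which is fine until profiling shows it matters.

```lua
-- Hypothetical sketch: a plain table as a FIFO queue, no Queue class.
local queue = {}
table.insert(queue, "http://example.com/")   -- enqueue at the tail
table.insert(queue, "http://example.com/a")
local first = table.remove(queue, 1)         -- dequeue from the head
-- first is now "http://example.com/" and one entry remains queued
```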
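On anchoring: a Lua pattern without the ^ anchor matches an absolute-URL
prefix anywhere in the string, so a relative URI that merely contains an
absolute URL (say, in a query parameter) would be misclassified. A
hypothetical anchored check; the real is_url_absolute may use a
different pattern:

```lua
-- Hypothetical sketch of an anchored scheme check; not the library's
-- actual implementation.
local function is_url_absolute(url)
  -- ^ anchors the search to the start of the URI: a leading letter,
  -- then scheme characters, then "://"
  return url:match("^%a[%w+.-]*://") ~= nil
end
```

With the anchor, "http://example.com/" is absolute while
"/page?next=http://example.com/" is not, because the scheme does not
start the string.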
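Returning an iterator from get_sitemap could look like the following
sketch (the name and signature here are illustrative assumptions, not
the library's API); callers then consume results lazily in a generic for
loop instead of receiving one large table up front:

```lua
-- Hypothetical sketch: get_sitemap as an iterator over crawled URIs.
local function get_sitemap(crawled)
  local i = 0
  return function()
    i = i + 1
    return crawled[i]  -- returns nil past the end, which stops the loop
  end
end

for uri in get_sitemap({"/", "/login", "/docs/"}) do
  -- process each URI as it is produced
end
```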
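On the registry and the shared-queue problems: both go away if each
crawl keeps its state in its own object and exposes it through
accessors, rather than stashing it in the registry or in library-level
globals. A hypothetical sketch, with names that are illustrative only:

```lua
-- Hypothetical sketch: per-crawler state instead of global/registry
-- state, so concurrent scripts cannot trample each other's queues.
local Crawler = {}
Crawler.__index = Crawler

function Crawler.new(base)
  -- each instance owns its own queue; nothing is stored globally
  return setmetatable({ base = base, queue = {} }, Crawler)
end

function Crawler:enqueue(uri)
  table.insert(self.queue, uri)
end

function Crawler:pending()
  -- accessor to private data instead of exposing internals directly
  return #self.queue
end
```

Two scripts can then each call Crawler.new and crawl at the same time
without sharing any queue structures.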
- Patrick Donnelly
Sent through the nmap-dev mailing list
Archived at http://seclists.org/nmap-dev/