mailing list archives
Re: [NSE] New script http-unsafe-output-encoding
From: Patrik Karlsson <patrik () cqure net>
Date: Thu, 15 Dec 2011 10:30:19 +0100
On Thu, Dec 15, 2011 at 8:40 AM, Martin Holst Swende <martin () swende se> wrote:
On 12/15/2011 07:20 AM, Patrik Karlsson wrote:
On Sun, Dec 11, 2011 at 9:56 PM, Martin Holst Swende <martin () swende se> wrote:
On 12/11/2011 08:52 PM, Patrik Karlsson wrote:
I just committed a new script called http-grep. It does pretty much what
the name suggests and enables you to search for patterns within spidered
pages. I've included a few example usages and their responses, but the
script can obviously be used for a lot more.
You're on fire!
I also threw together a script, based on an old tool I wrote a long time
ago and which has served me very well (https://bitbucket.org/holiman/jinx).
I basically ported it to nmap using the new spider. What it does is:
- Checks if a spidered page contains parameters
- If so, checks if any of these were reflected on the page (e.g.,
"foobar" and "funzip" were found)
- If N reflections were found, creates N new urls:
-- The payload is this: ghz>hzx"zxc'xcv
- For each of these N new links, it fetches the content. In the content,
it checks if any of the "dangerous" characters were reflected without
escaping.
If any such things are found, chances are high that the page is vulnerable
to reflected XSS.
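The reflection check described above can be sketched roughly as follows. This is an illustrative Python snippet, not the actual NSE/Lua code; the marker/character pairs are derived from the payload shown above, and the function name is hypothetical:

```python
# The payload ghz>hzx"zxc'xcv embeds >, " and ' between unique markers.
# If a marker pair comes back from the server with the dangerous
# character intact, output escaping is missing for that character.
PAYLOAD = "ghz>hzx\"zxc'xcv"

def unsafe_reflections(body: str) -> list:
    """Return the dangerous characters reflected without escaping."""
    checks = [("ghz>hzx", ">"), ('hzx"zxc', '"'), ("zxc'xcv", "'")]
    return [ch for marker, ch in checks if marker in body]

# Example: a page that HTML-encodes " and ' but leaves > intact.
body = "<p>search: ghz>hzx&quot;zxc&#39;xcv</p>"
print(unsafe_reflections(body))  # ['>']
```

A non-empty result means at least one metacharacter survived unescaped, which is the signal the script reports on.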
Thanks for the contribution Martin! I've renamed the script to
http-unsafe-output-escaping and made some minor cleanup.
It's committed as r27488.
If we ever implement an HTML parser (and I mean a proper lexer-based
parser, not a regexp-based "parser"), this script could be improved upon
quite a bit. The best way to do this, imho, is to:
1) Check where the reflected content is (what context). Common cases:
1.1 <tag>$content</tag>
1.2 <tag attr="$content" ...
1.3 <tag attr='$content' ...
1.4 <tag attr=$content ...
1.5 other or unknown because of invalid html
2) Depending on where the reflection(s) occurred, check only the
characters required to break out of that context (and potentially execute
script). For case 1.1 that means only < and >.
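The context-aware idea could look something like the sketch below. The mapping is illustrative (Python rather than Lua, and the context names are invented for this example); the point is that once a real parser reports where the reflection landed, only the breakout characters for that context need testing:

```python
# Hypothetical mapping from reflection context to the characters needed
# to break out of it. Context names here are made up for illustration.
BREAKOUT_CHARS = {
    "text": ["<", ">"],           # case 1.1: <tag>$content</tag>
    "attr_double": ['"'],         # case 1.2: <tag attr="$content">
    "attr_single": ["'"],         # case 1.3: <tag attr='$content'>
    "attr_unquoted": [" ", ">"],  # case 1.4: <tag attr=$content>
}

def chars_to_test(context: str) -> list:
    # Case 1.5 (other/unknown, invalid HTML): fall back to everything.
    default = sorted({c for chars in BREAKOUT_CHARS.values() for c in chars})
    return BREAKOUT_CHARS.get(context, default)

print(chars_to_test("attr_double"))  # ['"']
```

Narrowing the character set per context cuts down both the number of probe requests and the false positives from characters that are harmless where they landed.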
Patrick Donnelly was interested in adding Lua's LPeg a while back; perhaps
we can find and import a good HTML parser implementation based on LPeg? If
we had that, I think it could be useful for a lot of other scripts, and also
for the spider, which could use it to tackle non-trivial link parsing such
as the <base> tag and parsing forms.
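To illustrate why <base> handling matters for a spider, here is a minimal Python sketch (the real spider is an NSE/Lua library; this uses Python's stdlib parser purely to show the resolution rule): relative links must resolve against the <base href>, not the page's own URL.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute link targets, honoring any <base href>."""

    def __init__(self, page_url):
        super().__init__()
        self.base = page_url  # default base is the page URL itself
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "base" and attrs.get("href"):
            self.base = attrs["href"]  # <base> overrides the page URL
        elif tag == "a" and attrs.get("href"):
            self.links.append(urljoin(self.base, attrs["href"]))

p = LinkExtractor("http://example.com/a/page.html")
p.feed('<base href="http://example.com/b/"><a href="x.html">x</a>')
print(p.links)  # ['http://example.com/b/x.html']
```

A regexp-based extractor that resolved x.html against the page URL would request /a/x.html and miss the real target, which is the kind of non-trivial case a proper parser handles cleanly.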
A good parser would certainly make things a lot easier. LPeg has been
discussed a few times and I'm not sure where we're currently at with that.
In regards to the base tag, there's already support for that in the spider.
Sent through the nmap-dev mailing list
Archived at http://seclists.org/nmap-dev/