mailing list archives
Re: Denial of Service in WordPress
From: Jann Horn <jann () thejh net>
Date: Sat, 29 Jun 2013 08:20:27 +0200
On Fri, Jun 28, 2013 at 08:55:57PM +0300, MustLive wrote:
> Why do you think it will be very slow? In the last 5.5 years you are the first to tell me, concerning Looped DoS,
> that the requests will be sent very slowly. So think about it: all those web site owners and web developers, in
> whose web applications I've found Looped DoS vulnerabilities, either fixed the holes after my report or said they
> would take it into account, but none of them ever used such an argument.
Yeah, I think there are two big reasons for that:
- It may not be a vuln, but it could be a nuisance.
- "Better safe than sorry": if fixing it is faster than arguing about it, just fix it, even if it's not really an issue.
> The request speed is as follows (tested on http://tinyurl.com/loopeddos1):
> - On average 5.83-7 requests/s for a looped redirect with 301/302 responses, i.e. it takes Firefox 3-3.6 seconds to
> make 21 requests before it blocks the redirect loop (and shows an error message). The situation is similar in other
> browsers that support such blocking. I didn't examine old IE, which doesn't block infinite loops, but the speed
> should be the same.
> - The faster the target web sites respond, the higher the request rate will be.
In other words, your "attack" behaves like a good load testing tool and slows down when the site becomes slower.
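The browser behaviour the quoted measurements describe (follow 301/302 redirects, give up after roughly 21 hops) can be sketched in a few lines of Python. This is an illustrative sketch only, not the original test: the `follow_redirects` function, the `fetch` callable, and the `a.example`/`b.example` URLs are all invented here so the loop can be demonstrated without any real network traffic.

```python
def follow_redirects(fetch, url, max_hops=21):
    """Follow 301/302 redirects like a browser, giving up after max_hops.

    `fetch` is any callable mapping a URL to (status_code, location_header);
    taking a callable keeps this sketch testable without real network I/O.
    Returns (final_url, hops, looped), where `looped` is True when the hop
    limit was reached -- the point at which a browser shows its loop error.
    """
    hops = 0
    while hops < max_hops:
        status, location = fetch(url)
        if status not in (301, 302) or location is None:
            return url, hops, False   # a real response; no loop detected
        url = location
        hops += 1
    return url, hops, True            # gave up: redirect loop detected


# Two hypothetical URLs that redirect to each other, like the tinyurl demo:
loop = {"http://a.example/": (302, "http://b.example/"),
        "http://b.example/": (302, "http://a.example/")}
fetch = lambda u: loop.get(u, (200, None))

final, hops, looped = follow_redirects(fetch, "http://a.example/")
# hops == 21 and looped is True: 21 requests, then the client bails out.
```

Note that the request rate is bounded by the round-trip time of each hop, which is exactly why the "attack" slows down as the target slows down.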
> That's for browsers, but there are also other clients, especially bots with no redirection limit, which can work
> even faster.
Which other clients? I can only think of ones that would be called "bot", and I don't think any of those will crawl
anything with a speed >1req/s or so.
If they do, it's the bot owner's fault.
> - If the looped requests stay inside one domain, the speed will be higher (and that is useful for attacking not only
> WordPress < 2.3, but also WP 2.3 - 3.5.2). And the load will not be split between two domains (as it is in my two
> examples with tinyurl.com).
> - Open two or more iframes with the looped redirect to the same site, to multiply the speed of the attack.
> - Get a sufficient number of clients (people or bots) to unknowingly participate in the attack, say 1000 or more,
> and that will be enough to DoS a site on a slow server.
Anonymous actually tested that approach for you, google "js loic" or so (apart from the fact that the
participants mostly had an idea of what they were doing). Yeah, it does work, but there isn't much
a site owner can do to prevent it in his webapp – however, he can blacklist the originating IPs so
that the browsers' connections all time out, thereby slowing down the attack. Also, modern Chromium
detects such a scenario and actively delays connections to a server that seems to be down.
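The mitigation mentioned above (cutting off the originating IPs so looping clients stall) can be sketched as a small sliding-window throttle. This is a minimal illustration under assumed names: the `PerIPRateLimiter` class and its `limit`/`window` parameters are not from the thread, and in practice the blocking would live in the web server, a reverse proxy, or the firewall rather than in application code.

```python
import time
from collections import defaultdict, deque

class PerIPRateLimiter:
    """Sliding-window request throttle, one window per client IP.

    Once an IP exceeds `limit` requests within `window` seconds, further
    requests are rejected, so a client stuck in a redirect loop stalls
    instead of hammering the application.
    """
    def __init__(self, limit=10, window=1.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False                 # over the limit: reject/drop
        q.append(now)
        return True
```

Blacklisting at the firewall, as suggested above, is cheaper still, because the attacker's connections then simply time out instead of consuming application resources.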
> Note that every attack runs infinitely (when using appropriate clients, or when using JS or meta-refresh to keep
> normal browsers from stopping the endless loop), not just a single request from each client.
Only if people knowingly participate. Otherwise, your browser window will eventually be closed (although
it can take some time).
> No need to think that in 2013 every web site owner has resources like Google does.
I don't believe that many people here assume that.
> There are a lot of sites on slow servers,
> and there are a lot of sites with redirectors
Sure (and that's not a problem).
> (and even though real Looped DoS holes are rare, by using a redirector it's possible to create one at any web site
> that has a redirector).
Full-Disclosure - We believe in it.
Hosted and sponsored by Secunia - http://secunia.com/