On Mon, 2016-10-03 at 20:28 +0100, John Graham-Cumming wrote:
- Benign GET / repeated 1000 times per second. That's a DoS on
the server
Is this going to work over Tor anyway? I suppose your concern would be PHP, etc., which falls over much faster than the web server calling it, no?
- Shellshock. Looks like a benign GET / but nasty payload in
User-Agent header
- Simple GET but with SQLi in the URI
I suppose you're not worried about targeted attacks per se here, as those attackers can always solve the current CAPTCHA, but about automated attackers who attempt attacks on many servers, no?
Are these serious concerns? I suppose they're more serious than the DoS concerns, so that sounds bad from the token stockpiling perspective.*
On Mon, 2016-10-03 at 20:57 +0100, John Graham-Cumming wrote:
I guess, but that seems sad. We should filter out bad requests and allow legitimate users to get access to web sites (static or not).
I'm wondering if the extension should first attempt to load the page in a way that does not require spending a token. If that fails, there might be tricks to avoid some requests entirely; images could be dropped when the cache fails to serve them, for example.
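
Roughly the client-side fallback I have in mind, as a sketch only; the helper names, the header, and the challenge check below are placeholders I'm inventing here, not anything the extension or CloudFlare actually exposes:

    type Kind = "document" | "image" | "other";

    async function loadResource(url: string, kind: Kind): Promise<Response | null> {
      // First attempt: a plain request, hoping the edge serves it (e.g. from
      // cache) without demanding a CAPTCHA, so no token is spent.
      const plain = await fetch(url);
      if (!isChallenge(plain)) return plain;

      // Challenged. For subresources like images, dropping the request may
      // be preferable to spending a token on it.
      if (kind === "image") return null;

      // Otherwise redeem a stored token and retry.
      const token = await redeemToken(url);
      if (token === null) return plain; // out of tokens: show the CAPTCHA page
      return fetch(url, { headers: { "X-Token-Redemption": token } });
    }

    // Placeholder challenge detection; a real extension would inspect the
    // actual challenge response format, which this thread does not specify.
    function isChallenge(r: Response): boolean {
      return r.status === 403;
    }

    // Placeholder redemption: returns a spendable token or null if none remain.
    async function redeemToken(url: string): Promise<string | null> {
      return null;
    }
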
Jeff
* If this becomes an issue, there is an approach that might work: use multiple signing keys, one system-wide key C for all CloudFlare sites, and individual site keys for each site CloudFlare protects. If you solve a CAPTCHA then you withdraw a moderate stack of C tokens. If you visit site X then you spend an X token if you have one, but if you do not then you spend a single C token to withdraw tens of thousands of X tokens. So solving a CAPTCHA is worth hundreds of thousands of page loads, but only across a moderate number of sites. We could have separate Cbig and Csmall keys such that the extension first withdraws with Csmall, but if the user blows through that quickly then it withdraws with Cbig.
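
A rough sketch of that two-level wallet, just to make the bookkeeping concrete; withdrawSiteTokens(), the batch size, and the token representation are placeholders rather than any real issuance API, and the Cbig/Csmall split is left out:

    interface Wallet {
      cTokens: string[];                  // tokens signed under the system-wide key C
      siteTokens: Map<string, string[]>;  // per-site tokens, keyed by site
    }

    const SITE_BATCH = 10_000; // "tens of thousands" of X tokens per C token spent

    // Stub: redeem one C token with the issuer and receive a batch of tokens
    // signed under site X's key. A real implementation would run the blind
    // signing protocol here.
    async function withdrawSiteTokens(site: string, cToken: string, count: number): Promise<string[]> {
      return Array.from({ length: count }, (_, i) => `${site}-token-${i}`);
    }

    async function spendFor(site: string, wallet: Wallet): Promise<string | null> {
      // Prefer a token already issued under this site's key X.
      const stack = wallet.siteTokens.get(site) ?? [];
      if (stack.length > 0) return stack.pop()!;

      // Otherwise spend a single C token to withdraw a large stack of X
      // tokens, so one CAPTCHA solution covers many page loads but only
      // across a moderate number of distinct sites.
      const c = wallet.cTokens.pop();
      if (c === undefined) return null; // wallet empty: the user must solve another CAPTCHA

      const fresh = await withdrawSiteTokens(site, c, SITE_BATCH);
      wallet.siteTokens.set(site, fresh);
      return fresh.pop() ?? null;
    }
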