Hi
I'm the happy maintainer of wardsback : B143D439B72D239A419F8DCE07B8A8EB1B486FA7
As many of us have noticed, many guard nodes are being abused by extremely high numbers of connection attempts. Thanks to some of you guys, I managed to put some mitigation in place [0] and I assume many of us did as well.
I now sit back with some questions and concerns:
1) Why didn't we see this abuse wave coming? We kept replying to reporters of the dreaded "Failing because we have XXX connections already. Please read doc/TUNING for guidance" about how they could amend their config to accept more connections. The 'global scale' of those events should have been detected, but most of us assumed it was due to nodes' bad config.
2) We can see on Metrics [1] that the guard count has been dropping rapidly for a couple of weeks now, presumably because many guard maintainers gave up on restarting their crushed nodes. (I never will, even though my Metrics graph shows I've also been in trouble.)
3) What could we do to better detect those 'attacks' and spread the word to fellow maintainers about how to mitigate / correct the situation?
I must admit I don't have a valuable clue about how things can technically be improved, but I humbly wanted to share a few thoughts here.
Peace
[0] : https://lists.torproject.org/pipermail/tor-relays/2017-December/013846.html [1] : https://metrics.torproject.org/relayflags.html?start=2017-09-21&end=2017...
On 21 Dec 2017, at 03:22, fcornu@wardsback.org wrote:
- Why didn't we see this abuse wave coming? We kept replying to reporters of the dreaded "Failing because we have XXX connections already. Please read doc/TUNING for guidance" about how they could amend their config to accept more connections. The 'global scale' of those events should have been detected, but most of us assumed it was due to nodes' bad config.
Load spikes are normal, particularly with the HSDir flag, because HSDir usage is not bandwidth-weighted.
Allowing more connections *is* the right thing to do with this attack, if your OS has the resources. Several of my relays never went down, because they were over-provisioned with RAM and CPU.
Others only went down temporarily, during the most intense phases. (And then their excessive bandwidth weight was redistributed, and they have been coping well.)
If you don't have the resources to handle that many connections, then limiting connections is the right thing to do. If you can't do it using tor, then a firewall is the way to go.
(There are some bugs in Tor that make the attack more effective than it should be. We're working on fixing them.)
- We can see on Metrics [1] that the guard count has been dropping rapidly for a couple of weeks now, presumably because many guard maintainers gave up on restarting their crushed nodes. (I never will, even though my Metrics graph shows I've also been in trouble.)
Nodes lose the Guard flag when they go down or restart.
If they are set to restart automatically, the flag will come back eventually.
If they are not, hopefully operators will restart crashed relays.
- What could we do to better detect those 'attacks' and spread the word to fellow maintainers about how to mitigate / correct the situation?
That's a good question. Detecting new attacks is hard!
And some of us are busy trying to fix this one :-)
...
-- Tim / teor
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B ricochet:ekmygaiu4rzgsk6n ------------------------------------------------------------------------
On 21 Dec 2017, at 08:51, teor teor2345@gmail.com wrote:
If you don't have the resources to handle that many connections, then limiting connections is the right thing to do. If you can't do it using tor, then a firewall is the way to go.
(There are some bugs in Tor that make the attack more effective than it should be. We're working on fixing them.)
To mitigate this attack, we recommend setting MaxMemInQueues to the amount of RAM you have available per tor instance (or maybe a few hundred MB less).
Tor estimates it, but the estimate isn't very good.
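For example, on a box with 4 GB of RAM free for a single tor instance, that rule of thumb would give something like the following torrc line (the 4 GB is only an illustrative figure; size it to your own free RAM):

MaxMemInQueues 3584 MB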
T
-- Tim / teor
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B ricochet:ekmygaiu4rzgsk6n ------------------------------------------------------------------------
On 20/12/2017 at 23:15, teor wrote:
If you don't have the resources to handle that many connections, then limiting connections is the right thing to do. If you can't do it using tor, then a firewall is the way to go.
This has been put in place and the relay is now able to sustain the still-ongoing flood.
(There are some bugs in Tor that make the attack more effective than it should be. We're working on fixing them.)
To mitigate this attack, we recommend setting MaxMemInQueues to the amount of RAM you have available per tor instance (or maybe a few hundred MB less).
Tor estimates it, but the estimate isn't very good.
This was added about 12 hours ago (and the relay SIGHUPed), and I still cannot see any trace of circuit OOM kills in the relay logs.
The 2 most recent heartbeat reports also show a 'normal' circuit count.
Thanks for all the fish :)
On Wed, 20 Dec 2017 17:22:54 +0100 fcornu@wardsback.org allegedly wrote:
Hi
I'm the happy maintainer of wardsback : B143D439B72D239A419F8DCE07B8A8EB1B486FA7
And I run 0xbaddad - EA8637EA746451C0680559FDFF34ABA54DDAE831, a guard (though whether it stays a guard depends; it keeps falling over).
As many of us have noticed, many guard nodes are being abused by extremely high numbers of connection attempts. Thanks to some of you guys, I managed to put some mitigation in place [0] and I assume many of us did as well.
I'm still looking at mitigation. I'd rather not add iptables filter rules because it feels like the wrong thing to do (I might hurt legitimate connections) and at the wrong end of the stack. I'd prefer there to be mitigations available at the application end (Tor itself). But realistically I know that that is difficult and the Tor developer team are still working hard at this problem. (As an aside, I'd be very grateful for any feedback from other relay operators who /have/ added iptables "connlimit" rules. What is your view either way?)
I'm only sticking my head above the parapet now to note what I am seeing.
So: My logs show Tor staying up for around 10 minutes at a time before rebooting with the following sort of entries:
Dec 21 16:25:44.000 [notice] Performing bandwidth self-test...done.
Dec 21 16:35:20.000 [notice] Tor 0.3.1.9 (git-df96a13e9155c7bf) opening log file.
Dec 21 16:35:20.946 [notice] Tor 0.3.1.9 (git-df96a13e9155c7bf) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.1.0f, Zlib 1.2.8, Liblzma 5.2.2, and Libzstd 1.1.2.
Dec 21 16:35:20.947 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Dec 21 16:35:20.947 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Dec 21 16:35:20.947 [notice] Read configuration file "/etc/tor/torrc".
Dec 21 16:35:20.951 [notice] Based on detected system memory, MaxMemInQueues is set to 369 MB. You can override this by setting MaxMemInQueues by hand.
Dec 21 16:35:20.952 [notice] Opening Control listener on 127.0.0.1:9051
Dec 21 16:35:20.953 [notice] Opening OR listener on 0.0.0.0:9001
Dec 21 16:35:20.000 [notice] Not disabling debugger attaching for unprivileged users.
Dec 21 16:35:21.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Dec 21 16:35:21.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Dec 21 16:35:22.000 [notice] Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Dec 21 16:35:22.000 [notice] Your Tor server's identity key fingerprint is '0xbaddad EA8637EA746451C0680559FDFF34ABA54DDAE831'
Dec 21 16:35:22.000 [notice] Bootstrapped 0%: Starting
Dec 21 16:35:31.000 [notice] Starting with guard context "default"
Dec 21 16:35:31.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Dec 21 16:35:31.000 [notice] Signaled readiness to systemd
Dec 21 16:35:31.000 [notice] Opening Control listener on /var/run/tor/control
Dec 21 16:35:31.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Dec 21 16:35:32.000 [warn] Problem bootstrapping. Stuck at 85%: Finishing handshake with first hop. (Connection refused; CONNECTREFUSED; count 10; recommendation warn; host CD14AE63A02686BAE838A8079449B480801A8A5F at 195.181.208.180:443)
Dec 21 16:35:32.000 [warn] 9 connections have failed:
Dec 21 16:35:32.000 [warn] 9 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:32.000 [warn] Problem bootstrapping. Stuck at 85%: Finishing handshake with first hop. (Connection refused; CONNECTREFUSED; count 11; recommendation warn; host 500FE4D6B529855A2F95A0CB34F2A10D5889E8C1 at 134.19.177.109:443)
Dec 21 16:35:32.000 [warn] 10 connections have failed:
Dec 21 16:35:32.000 [warn] 10 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:32.000 [warn] Problem bootstrapping. Stuck at 85%: Finishing handshake with first hop. (Connection refused; CONNECTREFUSED; count 12; recommendation warn; host 3DE7762DD6165FD70C74BD02A6589C8C0C1B020A at 62.210.76.88:9001)
Dec 21 16:35:32.000 [warn] 11 connections have failed:
Dec 21 16:35:32.000 [warn] 11 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:32.000 [warn] Problem bootstrapping. Stuck at 85%: Finishing handshake with first hop. (Connection refused; CONNECTREFUSED; count 13; recommendation warn; host 03DC081E4409631006EFCD3AF13AFAAF2B553FFC at 185.32.221.201:443)
Dec 21 16:35:32.000 [warn] 12 connections have failed:
Dec 21 16:35:32.000 [warn] 12 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:32.000 [warn] Problem bootstrapping. Stuck at 85%: Finishing handshake with first hop. (Connection refused; CONNECTREFUSED; count 14; recommendation warn; host 51939625169E2C7E0DC83D38BAE628BDE67E9A22 at 109.236.90.209:443)
Dec 21 16:35:32.000 [warn] 13 connections have failed:
Dec 21 16:35:32.000 [warn] 13 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:32.000 [warn] Problem bootstrapping. Stuck at 85%: Finishing handshake with first hop. (Connection refused; CONNECTREFUSED; count 15; recommendation warn; host 500FE4D6B529855A2F95A0CB34F2A10D5889E8C1 at 134.19.177.109:443)
Dec 21 16:35:32.000 [warn] 14 connections have failed:
Dec 21 16:35:32.000 [warn] 14 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:32.000 [warn] Problem bootstrapping. Stuck at 85%: Finishing handshake with first hop. (Connection refused; CONNECTREFUSED; count 16; recommendation warn; host 03DC081E4409631006EFCD3AF13AFAAF2B553FFC at 185.32.221.201:443)
Dec 21 16:35:32.000 [warn] 15 connections have failed:
Dec 21 16:35:32.000 [warn] 15 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:32.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Dec 21 16:35:33.000 [warn] Problem bootstrapping. Stuck at 90%: Establishing a Tor circuit. (Connection refused; CONNECTREFUSED; count 17; recommendation warn; host 1FA8F638298645BE58AC905276680889CB795A94 at 185.129.249.124:9001)
Dec 21 16:35:33.000 [warn] 16 connections have failed:
Dec 21 16:35:33.000 [warn] 16 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:33.000 [warn] Problem bootstrapping. Stuck at 90%: Establishing a Tor circuit. (Connection refused; CONNECTREFUSED; count 18; recommendation warn; host DAC825BBF05D678ABDEA1C3086E8D99CF0BBF112 at 185.73.220.8:443)
Dec 21 16:35:33.000 [warn] 17 connections have failed:
Dec 21 16:35:33.000 [warn] 17 connections died in state connect()ing with SSL state (No SSL object)
Dec 21 16:35:33.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Dec 21 16:35:33.000 [notice] Bootstrapped 100%: Done
So - I get loads of CONNECTREFUSED whilst coming up (presumably because of the attack) and then come fully back online. "netstat" then shows my connections rising rapidly to around the 10,000-11,000 "ESTABLISHED" mark before it all goes wrong again.
As others have noted, I see multiple connections from OVH (netblock 54.36.51/24): around 1200, when I normally only see a max of 200 or so per /24, and a more normal dozen or so per /24. The next largest, at around 700-800, is 144.76.175/24 (Hetzner Online). I don't recall seeing that level of connections in the past.
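For anyone who wants to do the same count, something along these lines should work (a rough sketch, not my exact command; 9001 here is the ORPort, adjust as needed):

netstat -tn | awk '$6 == "ESTABLISHED" && $4 ~ /:9001$/ {split($5, a, "."); print a[1]"."a[2]"."a[3]".0/24"}' | sort | uniq -c | sort -rn | head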
If anyone wants more info, let me know.
Best
Mick
--------------------------------------------------------------------- Mick Morgan gpg fingerprint: FC23 3338 F664 5E66 876B 72C0 0A1F E60B 5BAD D312 http://baldric.net
---------------------------------------------------------------------
Hi mick
And I run 0xbaddad - EA8637EA746451C0680559FDFF34ABA54DDAE831 a guard (though whether it stays a guard depends. It keeps falling over.)
Still a guard.
(As an aside, I'd be very grateful for any feedback from other relay operators who /have/ added iptables "connlimit" rules. What is your view either way?)
It's currently good to be restrictive. Maybe a *per IP* limit of 20 (slow DoS) and a *per IP* rate of 1 per sec (fast DoS) is good. I am on FreeBSD so I cannot give you a good iptables example. Maybe try what tordoswitchhunter in [1] recommends (/32 is good). You have to harvest your own hostile IPs :/
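A rough iptables sketch of that idea (untested on my side since I'm on pf; the ports and numbers are only examples, adjust them to your ORPorts and normal traffic):

# drop new connections from a single IP that already has 20 open to the ORPorts
iptables -A INPUT -p tcp -m multiport --dports 443,9001 --syn -m connlimit --connlimit-above 20 --connlimit-mask 32 -j DROP
# drop SYNs from a single IP arriving faster than roughly 1 per second
iptables -A INPUT -p tcp -m multiport --dports 443,9001 --syn -m hashlimit --hashlimit-above 1/second --hashlimit-burst 5 --hashlimit-mode srcip --hashlimit-srcmask 32 --hashlimit-name tor-syn -j DROP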
So: My logs show Tor staying up for around 10 minutes at a time before rebooting with the following sort of entries:
...
So - I get loads of CONNECTREFUSED whilst coming up (presumably because of the attack) and then come fully back online.
IMO your tor is searching for guards and they are under load, gone, or have lost their guard flag. Finally you found a guard :)
[1] https://lists.torproject.org/pipermail/tor-relays/2017-December/013839.html
On Thu, Dec 21, 2017 at 10:11:47PM +0100, Felix wrote:
It's currently good to be restrictive. May-be a *per ip* limit of 20 (slow DoS) and a *per ip* rate of 1 per sec (fast DoS) is good.
I'm getting up to speed on this issue (been absent for some days).
My current thought is that these are actually Tor clients, not intentional denial-of-service attacks, but there are millions of them so they are producing surprises and damage. (Also, maybe there is not a human behind each of the Tor clients, so maybe we shouldn't value them as much as we would value more Tor Browser users.)
I am on Freebsd
The FreeBSD relays running Tor 0.3.2 will especially benefit from today's 0.3.2.8-rc release, because bug 24671 only affects non-Linux or ancient-Linux relays:
o Minor bugfixes (scheduler, KIST):
  - Use a sane write limit for KISTLite when writing onto a connection buffer instead of using INT_MAX and shoving as much as it can. Because the OOM handler cleans up circuit queues, we are better off at keeping them in that queue instead of the connection's buffer. Fixes bug 24671; bugfix on 0.3.2.1-alpha.
(Connection refused; CONNECTREFUSED; count 18; recommendation warn; host DAC825BBF05D678ABDEA1C3086E8D99CF0BBF112 at 185.73.220.8:443)
So - I get loads of CONNECTREFUSED whilst coming up (presumably because of the attack) and then come fully back online.
IMO your tor searches for guards and they are under load, gone or lost their guard flag. Finally you found a guard :)
Yes, I agree. (Though if they were gone or lost their guard flag, you would not have tried them and gotten a CONNECTREFUSED. So I think they are all suffering from the "under load" case. Gosh.)
--Roger
On 22 Dec 2017, at 10:08, Roger Dingledine arma@mit.edu wrote:
(Connection refused; CONNECTREFUSED; count 18; recommendation warn; host DAC825BBF05D678ABDEA1C3086E8D99CF0BBF112 at 185.73.220.8:443)
So - I get loads of CONNECTREFUSED whilst coming up (presumably because of the attack) and then come fully back online.
IMO your tor searches for guards and they are under load, gone or lost their guard flag. Finally you found a guard :)
Yes, I agree. (Though if they were gone or lost their guard flag,
Gone, yes.
But don't client circuits try previously selected guards, even if they don't have the guard flag right now? (I know we don't re-weight guards as new consensuses arrive. I don't know if we ignore them once they lose the guard flag.)
you would not have tried them and gotten a CONNECTREFUSED. So I think they are all suffering from the "under load" case. Gosh.)
Yes, this is probably a lack of file descriptors, and new connections are punished more severely than existing ones.
T
Still under heavy attack even with MaxMemInQueues and 0.3.2.8-rc. I need 2 Xeons to push 30 Mbit as a guard/middle …
Markus
On 22-Dec-17 at 08:25, niftybunny wrote:
Still under heavy attack even with MaxMemInQueues and 0.3.2.8-rc. I need 2 Xeons to push 30 Mbit as a guard/middle …
Do you want to share some information? (Some rough commands for checking each are sketched after the list.)
Type i) (memory exhaustion by too many circuits): What is the memory (top) per tor and its MaxMemInQueues? How many circuits per hour in the log?
Type ii) (CPU exhaustion by too many 'half-open' tor connections): Is your number of open files normal (firewall in place), and are the connection counts per remote IP moderate?
Type iii) (someone fills your server with too many long fat pipes, first ACK and RTT): If on FreeBSD, is "mbuf clusters in use" (netstat -m) moderate? Do you get "kern.ipc.nmbclusters limit reached" in messages?
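On a Linux relay, something like the following would answer i) and ii); the log path, ORPort and a single tor instance are assumptions, and iii) is the FreeBSD-only check:

# i) resident memory per tor process, and recent circuit counts from the heartbeat lines
ps -o rss,args -C tor
grep "Circuit handshake stats" /var/log/tor/notices.log | tail -5

# ii) open file descriptors and established connections per remote IP (ORPort 9001 assumed)
ls /proc/$(pidof tor)/fd | wc -l
netstat -tn | awk '$4 ~ /:9001$/ {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head

# iii) FreeBSD only: mbuf cluster usage
netstat -m | grep "mbuf clusters"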
Short answer:
https://i.imgur.com/8QLptcz.png
I can see around 15000-18000 connections with netstat. Even my 300 Mbit exit has fewer, and there are a lot of Leaseweb clients connecting to me... The interesting thing is, it comes and goes in waves, from 6000 (normal) to 20000 connections within an hour. Someone doesn't like me very much :(
Markus
Out of 133 IPs blocked with my rather aggressive firewall ruleset:
leaseweb.com - 26
your-server.de - 66
ip-54-36-51.eu - 17
That was in < 24hrs.
That's "only" "relays" with multiple connections to your relay? Interesting to see Hetzner there …
Markus
Every IP I checked through Atlas belonging to the mentioned hosts was NOT a relay; they were all client connections.
I also got 17 from OVH (under ip-54-36-51.eu) and plenty from leaseweb.com too (didn't count), but none from your-server.de.
Interestingly, the OVH ones were 2 (nearby) consecutive blocks of 4 and 13 IPs (and they are not relays).
All,
Just adding my 0.02c; from the hosts going above 24 connections (my FW limit), the ASNs involved seem to focus on:
 5 LEASEWEB-USA-WDC-01 - Leaseweb USA, Inc., US
18 OVH, FR
25 LEASEWEB-NL-AMS-01 Netherlands, NL
That's 48 of the 72 IPs exhibiting this behaviour, and the Leaseweb ones are consecutive IPs.
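If others want the same per-ASN breakdown without sharing raw IPs, something like this should work (a sketch from memory; 9001 is an assumed ORPort, the field position in the Team Cymru bulk whois output may need adjusting, and note this sends your peers' IPs to a third-party lookup service):

netstat -tn | awk '$4 ~ /:9001$/ {split($5, a, ":"); print a[1]}' | sort -u > peers.txt
( echo begin; cat peers.txt; echo end ) | nc whois.cymru.com 43 | awk -F'|' 'NR>1 {print $3}' | sort | uniq -c | sort -rn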
Careful not to share IPs here :-)
All seen from the perspective of SJC01 / 328E54981C6DDD7D89B89E418724A4A7881E3192
Stijn
I thought I'd better report this event, as it occurred shortly after upgrading to 0.3.2.8-rc.
Regarding: BF735F669481EE1CCC348F0731551C933D1E2278
This relay ran 0.3.2.6-rc through the initial DoS and did not appear to be involved/affected. It was upgraded to 0.3.2.8-rc yesterday around 0400 UTC, restarted, and appeared to resume operation (same level of traffic) without difficulty.
This morning, logs reveal that a circuit building "storm" commenced around 1000 UTC yesterday. Circuit numbers rose from a typical 23-25K to 160-210K and remained in that range until 1600 UTC, about 30 minutes ago. CPU use was pinned and RAM was intermittently exhausted during this time, but there's no indication the relay went down. Network traffic did not increase significantly during this "storm". Circuit numbers have returned to typical values.
This relay runs on Debian 8 Stable in an AS-provided VM with 1 GB of RAM. Typical CPU load is ~40%, and typical RAM use is ~50%.
Rick
On Thu, Dec 21, 2017 at 10:11:47PM +0100, Felix wrote:
My current thought is that these are actually Tor clients, not intentional denial-of-service attacks, but there are millions of them so they are producing surprises and damage. (Also, maybe there is not a human behind each of the Tor clients, so maybe we shouldn't value them as much as we would value more Tor Browser users.)
I've started the process of cranking down the extra circuits that new clients make: https://trac.torproject.org/24716
With luck, over the next day or so things will get better. We'll learn something about the issue either way.
Keep an eye on your "Circuit handshake stats since last time" notice-level log lines over the next day or two.
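For example, something like this should pull out the most recent ones (the log path is an assumption; adjust to wherever your notice-level log goes):

grep "Circuit handshake stats since last time" /var/log/tor/notices.log | tail -3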
(This won't resolve the "way too many connections" issue though. One step at a time. :)
--Roger
Hi,
You can block inbound connections if you like, but it's only a partial mitigation for the attack.
On 22 Dec 2017, at 06:42, mick mbm@rlogin.net wrote:
So: My logs show Tor staying up for around 10 minutes at a time before rebooting with the following sort of entries:
...
Dec 21 16:35:20.951 [notice] Based on detected system memory, MaxMemInQueues is set to 369 MB. You can override this by setting MaxMemInQueues by hand.
Please set MaxMemInQueues to the amount of free RAM available to Tor, minus a few hundred megabytes for other data structures.
Please also increase the number of file descriptors available to Tor, if possible on your system.
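For example, on a systemd-based Debian setup that would be roughly the following (the unit name and the 65536 value are assumptions; pick a limit that suits your box):

# sudo systemctl edit tor@default   (the unit may be called tor.service on other setups)
[Service]
LimitNOFILE=65536

# after restarting the service, verify the new limit (single tor instance assumed):
grep "Max open files" /proc/$(pidof tor)/limits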
T
Hi
I've implemented the following mitigations:
* limit memory in queues. For my system that's a safe yet large enough setting (2 GB system mem, current usage around 320 MB):
MaxMemInQueues 768 MB
* connlimit: both count & rate. Although, based on observations, only the rate limit is actually being hit, and then only for the reported suspect networks (see below on counts per /24).
-A INPUT -p tcp -m multiport --dports 9080,9443 -m connlimit --connlimit-upto 360 --connlimit-mask 24 -m hashlimit --hashlimit-upto 20/second --hashlimit-mode srcip --hashlimit-srcmask 24 --hashlimit-name mask24 -j ACCEPT
With these settings, there are no stability issues.
Regards
Frequent offenders of the rate limit are (to name a few): 37.48.86.*, 51.15.161.*, 95.211.95.*, 212.32.226.*, 199.115.112.*, 213.227.137.*, 54.36.51.*