I've been seeing these storms as well on my relay. According to the tor logs I average something like 100 connections for weeks at a time, but then it suddenly jumps into the thousands and I see the "Failed to hand off onionskin." and "Your computer is too slow to handle this many circuit creation requests!" messages.
I wonder if it's some type of DDOS too.
I thought about using this method (http://www.debian-administration.org/articles/187) on the relay and dir ports, but I'm not sure what sort of limits to set. Does one Tor circuit correspond to one iptables-tracked connection? Or if a user hits a webpage with 100 ads on it, would one Tor circuit show up as 100 iptables connections?
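For reference, the article's approach uses the iptables "recent" module to rate-limit new inbound TCP connections per source address. A minimal sketch might look like the lines below; the ORPort number (9001) and the more-than-5-hits-per-minute threshold are assumptions you would want to tune, and note that this counts TCP connections, not circuits - many circuits are multiplexed over a single relay-to-relay connection.

  # Hypothetical sketch: track NEW connections to the ORPort (9001 assumed)
  # and drop a source that opens more than 5 of them within 60 seconds.
  iptables -A INPUT -p tcp --dport 9001 -m state --state NEW \
           -m recent --set --name orport
  iptables -A INPUT -p tcp --dport 9001 -m state --state NEW \
           -m recent --update --seconds 60 --hitcount 6 --name orport -j DROP

The obvious caveat is that many legitimate clients can share one address behind NAT, so too aggressive a threshold could lock them out.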
That's about as far as I got; I didn't want to break one thing by trying to fix another.
I did a lot of tuning on the Raspberry Pi and it's now much, much more stable as a Tor relay, but just now I had another "circuit creation storm." Interestingly, the Pi remained up, and my *router* crashed. I've also seen huge bursts of circuit creation on a relay I run on a VPS, but as it's a much more powerful box it rarely complains (and thus I rarely notice).
This many circuits and outbound connections are highly unusual for the small relay I'm running on the Pi, and the behavior definitely occurs in bursts. Is this an outbound DDOS or an attack on Tor itself? Either way, is there some way I could use iptables to temporarily "clamp" the ability to open new TCP connections whenever Tor (or anything on the Pi) opens more than some threshold in a short period of time?
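One possibility, offered only as a sketch: match outbound SYNs from the Tor user with the owner and hashlimit modules, so bursts above a chosen rate are dropped on the Pi instead of flooding the router's connection-tracking table. The user name (debian-tor), rate, and burst below are assumptions to adjust.

  # Hypothetical sketch: let the tor daemon (running as debian-tor, assumed)
  # open up to ~50 new outbound TCP connections per second with a burst of
  # 200; anything beyond that is dropped rather than passed to the router.
  iptables -A OUTPUT -p tcp --syn -m owner --uid-owner debian-tor \
           -m hashlimit --hashlimit-name tor-out \
           --hashlimit-upto 50/second --hashlimit-burst 200 -j ACCEPT
  iptables -A OUTPUT -p tcp --syn -m owner --uid-owner debian-tor -j DROP

Dropping SYNs only makes the remote ends retry, so this smooths bursts rather than removing the underlying demand.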
Here's log output (via 'arm') from the relay. After my router crashed twice, I went to its admin panel and saw hundreds of outbound connections from my Tor box. Times are America/Los_Angeles.
13:55:00 [ARM_NOTICE] Relay unresponsive (last heartbeat: Sat May 4 13:54:14 2013)
13:52:25 [WARN] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [404 similar message(s) suppressed in last 60 seconds]
13:51:07 [WARN] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [75 similar message(s) suppressed in last 60 seconds]
13:50:52 [WARN] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [601 similar message(s) suppressed in last 60 seconds]
13:48:39 [WARN] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [99 similar message(s) suppressed in last 60 seconds]
13:47:34 [WARN] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [22 similar message(s) suppressed in last 60 seconds]
13:46:17 [WARN] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [253 similar message(s) suppressed in last 60 seconds]
13:43:47 [WARN] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [1396 similar message(s) suppressed in last 60 seconds]
13:42:48 [WARN] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [16 similar message(s) suppressed in last 60 seconds]
Here's how it crashed my router (blowing the ip_conntrack limit only breaks many of my TCP connections at first, but eventually the router runs out of memory and starts killing processes):
May 4 13:51:24 dedmaus user.warn kernel: ip_conntrack: table full, dropping packet.
May 4 13:51:24 dedmaus user.warn kernel: ip_conntrack: table full, dropping packet.
May 4 13:51:24 dedmaus user.warn kernel: ip_conntrack: table full, dropping packet.
May 4 13:51:25 dedmaus user.warn kernel: ip_conntrack: table full, dropping packet.
May 4 13:51:29 dedmaus user.warn kernel: NET: 152 messages suppressed.
May 4 13:51:29 dedmaus user.warn kernel: ip_conntrack: table full, dropping packet.
May 4 13:51:34 dedmaus user.warn kernel: NET: 193 messages suppressed.
May 4 13:51:34 dedmaus user.warn kernel: ip_conntrack: table full, dropping packet.
May 4 13:51:39 dedmaus user.warn kernel: NET: 227 messages suppressed.
...ad infinitum with the number of messages suppressed per 5 sec increasing until the router crashes.
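If the router runs Linux and exposes a shell (an assumption; many consumer routers don't), the conntrack table size and the very long default timeout for established connections can be tuned, which at least postpones the point where it starts dropping packets. The numbers below are placeholders; sensible values depend on the router's RAM.

  # Older kernels, matching the ip_conntrack messages above:
  sysctl -w net.ipv4.netfilter.ip_conntrack_max=16384
  sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=3600
  # Newer kernels use the nf_conntrack names instead:
  # sysctl -w net.netfilter.nf_conntrack_max=16384
  # sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=3600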
On Mon, Mar 18, 2013, at 06:18 PM, torsion at ftml.net wrote:
I'm also seeing occasional messages like this on the Pi (it never lasts long):
18:13:24 [ARM_NOTICE] Relay resumed
18:13:18 [ARM_NOTICE] Relay unresponsive (last heartbeat: Mon Mar 18 18:13:04 2013)
17:28:43 [ARM_NOTICE] Relay resumed
17:28:38 [ARM_NOTICE] Relay unresponsive (last heartbeat: Mon Mar 18 17:28:25 2013)
14:12:26 [ARM_NOTICE] Relay resumed
14:12:20 [ARM_WARN] Deduplication took too long. Its current implementation has difficulty handling large logs so disabling it to keep the interface responsive.
14:12:20 [ARM_NOTICE] Relay unresponsive (last heartbeat: Mon Mar 18 14:12:06 20
On Mon, Mar 18, 2013, at 01:00 PM, torsion at ftml.net wrote:
Hi there, I just joined the mailing list and apologize if this has been discussed before. I did find discussion of a similar issue in the January 2013 archive:
https://lists.torproject.org/pipermail/tor-relays/2013-January/001809.html
It's important to note that I believe I've seen (but didn't save logs from) a couple of "circuit creation burst" events on my established relay (about 5 Mbps, stable, guard, non-exit). It was mostly able to handle them without crashing, as it has plenty of RAM, and the above-mentioned messages - "Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy." - appear only when the relay is under load for other reasons AND a large number of circuits are suddenly being created.
I wondered if this was some kind of DOS attempt but didn't think much of it because my fast relay continued working fine.
However, I've just set up a Raspberry Pi, the 512MB model, as a relay on a slower connection. Here are the relevant settings on this relay:
RelayBandwidthRate 130 KB
RelayBandwidthBurst 340 KB
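For reference, the option the warning keeps suggesting caps the bandwidth the relay advertises to the directory authorities, which in turn reduces how much traffic (and how many circuits) clients send its way. A hedged sketch of what adding it alongside the lines above might look like, with an illustrative value rather than a recommendation:

  # Hypothetical addition: advertise less than the relay's configured rate
  # so the directory authorities steer less traffic toward it.
  MaxAdvertisedBandwidth 100 KB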
The Pi has a fairly slow CPU, so I'd occasionally get messages about log deduplication being too slow or something, but didn't think much of it.
I finally got the relay up and left it up for over 24 hours. When I woke up this morning it had crashed. Here are the relevant log messages - note the huge jump in the number of circuits between 22:35 and 04:35 (maybe I got the Stable flag), then the storm of circuit open requests starting at 05:49. Eventually I believe the Pi ran out of memory and killed the tor process.
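One way to confirm the out-of-memory theory after the fact (log paths assumed for a Raspbian/Debian-style setup) is to look for OOM-killer messages in the kernel log:

  # Did the kernel's OOM killer pick the tor process?
  dmesg | grep -iE 'out of memory|oom'
  grep -iE 'out of memory|oom' /var/log/kern.log /var/log/syslog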
What's very interesting here is that my fast VPS relay, with a RelayBandwidthRate over 5x higher, almost never handles much more than 1000 circuits - so why the sudden demand on the Pi, which advertises a lower bandwidth rate?
Mar 17 22:35:00.000 [notice] Heartbeat: Tor's uptime is 1 day 0:00 hours, with 26 circuits open. I've sent 974.13 MB and received 969.92 MB.
Mar 18 04:35:00.000 [notice] Heartbeat: Tor's uptime is 1 day 6:00 hours, with 972 circuits open. I've sent 1.61 GB and received 1.59 GB.
Mar 18 05:49:44.000 [warn] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy.
Mar 18 05:49:44.000 [warn] Failed to hand off onionskin. Closing.
Mar 18 05:50:44.000 [warn] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [5817 similar message(s) suppressed in last 60 seconds]
Mar 18 05:52:30.000 [warn] Your system clock just jumped 101 seconds forward; assuming established circuits no longer work.
Mar 18 05:53:51.000 [warn] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [1055 similar message(s) suppressed in last 60 seconds]
Mar 18 05:55:14.000 [warn] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [329 similar message(s) suppressed in last 60 seconds]
I'd like to figure out just how much the Raspberry Pi is capable of, because it could be a cheap way for people who want to donate bandwidth to build out the relay network - but of course it needs to be stable, and something about my setup is not.
Also:
Mar 16 20:55:33.000 [notice] No AES engine found; using AES_* functions.
I have no idea if the Broadcom BCM2835 SoC (ARM1176JZF-S CPU) in the Pi has any AES capability, but it'd be great to find out.
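For what it's worth, the ARM1176JZF-S core itself has no AES instructions (those only arrived in much later ARM architecture revisions), so a software-only AES path is expected here. A couple of quick, generic checks show what the CPU and OpenSSL can see:

  # Inspect CPU feature flags and OpenSSL's view of available engines:
  grep -i -E 'model name|features' /proc/cpuinfo
  openssl engine              # lists any loadable hardware crypto engines
  openssl speed aes-128-cbc   # rough measure of software AES throughput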
On Wed, Jun 5, 2013, at 10:08 AM, Thomas Hand wrote:
I'm also having some problems with my rpi node going down every few days due to lack of resources and needing a reset. Can you mail me some of the alterations you made which might make it more stable? Thanks.
T
Hi, I'm out camping, far from the Pi and without much data coverage. I will post when I get back. I'm hoping somebody can pique the interest of a Tor developer in these circuit creation bursts; so far I haven't had success - hope this thread keeps going.
On Wed, Jun 05, 2013 at 09:20:02AM -0000, temp5@tormail.org wrote:
I've been seeing these storms as well on my relay. According to the tor logs I average something like 100 connections for weeks at a time, but then it suddenly jumps into the thousands and I see the "Failed to hand off onionskin." and "Your computer is too slow to handle this many circuit creation requests!" messages.
I wonder if it's some type of DDOS too.
The current theory is that these happen when your relay becomes the hidden service directory, or introduction point, for a popular hidden service.
So these are basically roving hotspots that move around the network. In the case of the hidden service directory the pain lasts about a day, and in the case of the introduction point, it lasts for some function of the duration of the introduction point (could be a while) and the time that the hidden service descriptor is fresh (15 minutes or so). Based on the logs here, it sounds like it might be the introduction point in these cases.
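If you want to check whether your relay currently has the HSDir flag (one hint that it may be acting as a hidden service directory), you can look at your own entry in the cached consensus. The data directory path and file name below are what a stock Debian install typically uses and may differ on other setups:

  # Hypothetical check: show the flags the consensus assigns to this relay.
  # Replace MyNickname with the relay's actual nickname.
  grep -A 2 '^r MyNickname ' /var/lib/tor/cached-consensus | grep '^s '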
Here are some tickets to look at:
https://trac.torproject.org/projects/tor/ticket/3825
https://trac.torproject.org/projects/tor/ticket/4862
https://trac.torproject.org/projects/tor/ticket/8950
Also, the switch to the new ntor circuit-level handshake should reduce the cpu requirements for create cells (in addition to being more secure). So once more people have switched to ntor, these hotspots shouldn't be so bad. It is unclear if that's the same as 'shouldn't be bad'. :)
https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/216-ntor-hand...
--Roger