Dear Tor developers,
[Please CC me in replies, I am not currently subscribed to tor-dev.]
Context: At the Institute of Networks and Security at Johannes Kepler University Linz, we have been hosting Austria's fastest exit node for the last ca. 9 months. It used to be listed as https://atlas.torproject.org/#details/01A9258A46E97FF8B2CAC7910577862C14F2C5... until very recently, and we tried to find out what went wrong when we saw traffic drop sharply a bit over a week ago. Unfortunately, two out of three people responsible for running this node were on holidays, so we could only start investigating today.
Setup: Please note that our setup is a bit particular for reasons that we will explain in more detail in a later message (including a proposed patch to the current source which has been pending also because of the holiday situation...). Briefly summarizing, we use a different network interface for "incoming" (Tor encrypted traffic) than for "outgoing" (mostly clearnet traffic from the exit node, but currently still includes outgoing Tor relay traffic to other nodes). The outgoing interface has the default route associated, while the incoming interface will only originate traffic in response to those incoming connections. Consequently, we let our Tor node only bind to the IP address assigned to the incoming interface 193.171.202.146, while it will initiate new outgoing connections with IP 193.171.202.150.
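Schematically (the interface names here are purely illustrative, only the addresses are real):

  eth0 "incoming":  193.171.202.146  (ORPort/DirPort listeners bind here)
  eth1 "outgoing":  193.171.202.150  (holds the default route; used as OutboundBindAddress)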
Problem: This worked nicely with Tor 0.2.5.12-1 on Debian Jessie. We upgraded about two weeks ago to 0.2.8.7-1 from the Tor apt repositories (mostly in response to https://blog.torproject.org/blog/tor-0287-released-important-fixes as a wakeup call that we were using old versions from Debian main). At first, it seemed to work well enough, but then the holidays came and we didn't actively watch it for the next week.... Now with 0.2.8.7-1, the traffic sent to our node started declining until it vanished completely. After a bit of debugging and rolling back to 0.2.5.12-1 (which is now active on our node as of a few hours ago, slowly approaching the 200MBit/s again), it seems that we discovered a regression concerning the handling of sockets. I can best summarize it with the relevant torrc config options and startup log lines from both versions:
root@tor2 ~ # grep 193.171.202 /etc/tor/torrc
ORPort 193.171.202.146:9001
ORPort 193.171.202.146:443
OutboundBindAddress 193.171.202.150
DirPort 193.171.202.146:9030
Sep 19 11:37:41.000 [notice] Tor 0.2.8.7 (git-cc2f02ef17899f86) opening log file.
Sep 19 11:37:41.194 [notice] Tor v0.2.8.7 (git-cc2f02ef17899f86) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.1t and Zlib 1.2.8.
Sep 19 11:37:41.194 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Sep 19 11:37:41.194 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Sep 19 11:37:41.194 [notice] Read configuration file "/etc/tor/torrc".
Sep 19 11:37:41.197 [warn] You specified a public address '0.0.0.0:9050' for SocksPort. Other people on the Internet might find your computer and use it as an open proxy. Please don't allow this unless you have a good reason.
Sep 19 11:37:41.198 [notice] Based on detected system memory, MaxMemInQueues is set to 2961 MB. You can override this by setting MaxMemInQueues by hand.
Sep 19 11:37:41.198 [warn] Tor is running as an exit relay. If you did not want this behavior, please set the ExitRelay option to 0. If you do want to run an exit Relay, please set the ExitRelay option to 1 to disable this warning, and for forward compatibility.
Sep 19 11:37:41.198 [warn] You specified a public address '0.0.0.0:9050' for SocksPort. Other people on the Internet might find your computer and use it as an open proxy. Please don't allow this unless you have a good reason.
Sep 19 11:37:41.199 [notice] Opening Socks listener on 0.0.0.0:9050
Sep 19 11:37:41.199 [notice] Opening Control listener on 127.0.0.1:9051
Sep 19 11:37:41.199 [notice] Opening OR listener on 193.171.202.146:9001
Sep 19 11:37:41.199 [notice] Opening OR listener on 193.171.202.146:443
Sep 19 11:37:41.199 [notice] Opening Directory listener on 193.171.202.146:9030
Sep 19 11:37:41.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Sep 19 11:37:41.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Sep 19 11:37:41.000 [notice] Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Sep 19 11:37:41.000 [warn] I have no descriptor for the router named "ins1" in my declared family; I'll use the nickname as is, but this may confuse clients.
Sep 19 11:37:41.000 [warn] I have no descriptor for the router named "ins2" in my declared family; I'll use the nickname as is, but this may confuse clients.
Sep 19 11:37:41.000 [notice] Your Tor server's identity key fingerprint is 'ins0 01A9258A46E97FF8B2CAC7910577862C14F2C524'
Sep 19 11:37:41.000 [notice] Bootstrapped 0%: Starting
Sep 19 11:37:49.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Sep 19 11:37:49.000 [notice] Signaled readiness to systemd
Sep 19 11:37:50.000 [notice] Opening Control listener on /var/run/tor/control
Sep 19 11:37:51.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Sep 19 11:37:51.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Sep 19 11:37:51.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Sep 19 11:37:51.000 [notice] Bootstrapped 100%: Done
Sep 19 11:37:51.000 [notice] Now checking whether ORPort 193.171.202.150:9001 and DirPort 193.171.202.150:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Sep 19 11:38:30.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent.
Sep 19 11:38:34.000 [warn] You specified a server "ins1" by name, but the directory authorities do not have any key registered for this nickname -- so it could be used by any server, not just the one you meant. To make sure you get the same server in the future, refer to it by key, as "$CD9FD887A4572D46938640BA65F258851F1E418B".
Sep 19 11:38:34.000 [warn] You specified a server "ins2" by name, but the directory authorities do not have any key registered for this nickname -- so it could be used by any server, not just the one you meant. To make sure you get the same server in the future, refer to it by key, as "$7C3AF46F77445A0B1E903A5AF5B730A05F127BFC".
Sep 19 11:40:18.000 [notice] Performing bandwidth self-test...done.
Sep 19 11:57:50.000 [warn] Your server (193.171.202.150:9030) has not managed to confirm that its DirPort is reachable. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
Sep 19 12:00:27.000 [notice] Interrupt: we have stopped accepting new connections, and will shut down in 30 seconds. Interrupt again to exit now.
Sep 19 12:00:57.000 [notice] Clean shutdown finished. Exiting.
Sep 19 12:01:48.000 [notice] Tor 0.2.5.12 (git-3731dd5c3071dcba) opening log file.
Sep 19 12:01:48.000 [notice] Configured to measure directory request statistics, but no GeoIP database found. Please specify a GeoIP database using the GeoIPFile option.
Sep 19 12:01:48.000 [notice] Caching new entry debian-tor for debian-tor
Sep 19 12:01:48.000 [notice] Caching new entry debian-tor for debian-tor
Sep 19 12:01:48.000 [warn] I have no descriptor for the router named "ins1" in my declared family; I'll use the nickname as is, but this may confuse clients.
Sep 19 12:01:48.000 [warn] I have no descriptor for the router named "ins2" in my declared family; I'll use the nickname as is, but this may confuse clients.
Sep 19 12:01:48.000 [notice] Your Tor server's identity key fingerprint is 'ins0 01A9258A46E97FF8B2CAC7910577862C14F2C524'
Sep 19 12:01:48.000 [notice] Bootstrapped 0%: Starting
Sep 19 12:01:48.000 [notice] Bootstrapped 5%: Connecting to directory server
Sep 19 12:01:51.000 [notice] We now have enough directory information to build circuits.
Sep 19 12:01:51.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Sep 19 12:01:51.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Sep 19 12:01:52.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Sep 19 12:01:52.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Sep 19 12:01:52.000 [notice] Bootstrapped 100%: Done
Sep 19 12:01:52.000 [notice] Now checking whether ORPort 193.171.202.146:9001 and DirPort 193.171.202.146:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Sep 19 12:01:53.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent.
Sep 19 12:01:53.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.
Sep 19 12:01:55.000 [notice] Performing bandwidth self-test...done.
Please note the difference (0.2.8.7):
Sep 19 11:37:51.000 [notice] Now checking whether ORPort 193.171.202.150:9001 and DirPort 193.171.202.150:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
vs. (0.2.5.12):
Sep 19 12:01:52.000 [notice] Now checking whether ORPort 193.171.202.146:9001 and DirPort 193.171.202.146:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
I.e. 0.2.8.7 does not seem to honor the address the sockets are bound to when starting the reachability checks from the outside (it seems to use either the address associated with the default route or the OutboundBindAddress), although the socket binding itself is done correctly (the netstat output is exactly the same for both versions, with tor binding to the specific IP address only for the Dir and both OR ports). Consequently, the node is declared as non-reachable and drops off globe/atlas...
Has this change been intentional? I have to admit we have not yet checked the source code for further debugging, as we wanted to get the node back up as quickly as possible (after our unfortunately timed holidays, sorry for that).
with best regards, Rene
Update: After a hint by Peter Palfrader, I now set the Address option as well:
root@tor2 ~ # grep Address /etc/tor/torrc
Address 193.171.202.146
OutboundBindAddress 193.171.202.150
This seems to work with 0.2.8.7-1, so we should be up and running with a recent version now. However, we did not set Address before, exactly because we have two addresses assigned for the Tor exit node on this host (as opposed to being behind a NAT gateway with port forwarding with different internal and externally visible addresses, which might be the more common case). Is a setup like ours supported with explicitly setting Address?
Thanks, Rene
On Monday, 19 September 2016, René Mayrhofer wrote:
Update: After a hint by Peter Palfrader, I now set the Address option as well:
root@tor2 ~ # grep Address /etc/tor/torrc
Address 193.171.202.146
OutboundBindAddress 193.171.202.150
This seems to work with 0.2.8.7-1, so we should be up and running with a recent version now. However, we did not set Address before, exactly because we have two addresses assigned for the Tor exit node on this host (as opposed to being behind a NAT gateway with port forwarding with different internal and externally visible addresses, which might be the more common case). Is a setup like ours supported with explicitly setting Address?
The address set via the Address configuration option is the one that you want to see published in the server descriptor. If it's not set, then tor will start guessing, and in your case the guess went wrong [1], or at least changed.
I have Address set even on my non-multihomed, single-IP-address relays.
Cheers, Peter
[1] And that may be a regression; I haven't looked at that part.
Hi René,
Sorry that the upgrade to 0.2.8 has caused problems for you.
Thanks for analysing the issue, and for a very detailed bug report.
I have tried to explain why this happened below - there have been a lot of changes since 0.2.5, and what you're seeing is due to at least two of those changes, and at least one bug.
On 20 Sep 2016, at 00:09, René Mayrhofer rm@ins.jku.at wrote:
Update: After a hint by Peter Palfrader, I now set the Address option as well:
root@tor2 ~ # grep Address /etc/tor/torrc
Address 193.171.202.146
OutboundBindAddress 193.171.202.150
This seems to work with 0.2.8.7-1, so we should be up and running with a recent version now. However, we did not set Address before, exactly because we have two addresses assigned for the Tor exit node on this host (as opposed to being behind a NAT gateway with port forwarding with different internal and externally visible addresses, which might be the more common case). Is a setup like ours supported with explicitly setting Address?
With setting Address, yes. Without setting Address, not yet, at least not reliably.
It is always hard for Tor to guess addresses. We've been trying to work out how to make it easier and more reliable. I think you've also identified a new bug where Tor is unintentionally re-ordering addresses.
On 19 Sep 2016, at 23:14, René Mayrhofer rm@ins.jku.at wrote:
Dear Tor developers,
[Please CC me in replies, I am not currently subscribed to tor-dev.]
Context: At the Institute of Networks and Security at Johannes Kepler University Linz, we have been hosting Austria's fastest exit node for the last ca. 9 months. It used to be listed as https://atlas.torproject.org/#details/01A9258A46E97FF8B2CAC7910577862C14F2C5... until very recently, and we tried to find out what went wrong when we saw traffic drop sharply a bit over a week ago. Unfortunately, two out of three people responsible for running this node were on holidays, so we could only start investigating today.
Setup: Please note that our setup is a bit particular for reasons that we will explain in more detail in a later message (including a proposed patch to the current source which has been pending also because of the holiday situation...). Briefly summarizing, we use a different network interface for "incoming" (Tor encrypted traffic) than for "outgoing" (mostly clearnet traffic from the exit node, but currently still includes outgoing Tor relay traffic to other nodes). The outgoing interface has the default route associated, while the incoming interface will only originate traffic in response to those incoming connections. Consequently, we let our Tor node only bind to the IP address assigned to the incoming interface 193.171.202.146, while it will initiate new outgoing connections with IP 193.171.202.150.
This isn't the default setup, but it's actually quite common, particularly for Exit relays that want to segregate their outbound traffic from their public relay address.
Problem: This worked nicely with Tor 0.2.5.12-1 on Debian Jessie. We upgraded about two weeks ago to 0.2.8.7-1 from the Tor apt repositories (mostly in response to https://blog.torproject.org/blog/tor-0287-released-important-fixes as a wakeup call that we were using old versions from Debian main).
Thanks for upgrading! We know that it takes effort, and time to re-establish relay flags.
At first, it seemed to work well enough, but then the holidays came and we didn't actively watch it for the next week.... Now with 0.2.8.7-1, the traffic sent to our node started declining until it vanished completely. After a bit of debugging and rolling back to 0.2.5.12-1 (which is now active on our node as of a few hours ago, slowly approaching the 200MBit/s again), it seems that we discovered a regression concerning the handling of sockets. I can best summarize it with the relevant torrc config options and startup log lines from both versions:
root@tor2 ~ # grep 193.171.202 /etc/tor/torrc
ORPort 193.171.202.146:9001
ORPort 193.171.202.146:443
OutboundBindAddress 193.171.202.150
DirPort 193.171.202.146:9030
Since you don't set your IPv4 address using Address, this means that Tor tries to guess your address. On a machine with multiple IPv4 addresses, this means it might not guess the address you expect.
I think that 0.2.5 only looked at the first interface the OS returned, and that happened to be the one you wanted. But guessing using interface addresses is never going to be reliable on multi-IPv4 machines.
Between 0.2.5 and 0.2.8, the address guessing code was modified several times. It now looks at all your local network interfaces to guess the address. I think there was an unintentional ordering change. We can fix that (see below).
In 0.2.9, you will also get a warning when your ORPort bind address and guessed Address don't match: https://trac.torproject.org/projects/tor/ticket/13953
But I also think we should warn when Tor guesses between multiple addresses, because some operators are going to find that Tor guesses one they don't want: https://trac.torproject.org/projects/tor/ticket/20164
We also have a ticket open to change Tor to do exactly what most relay operators expect, which is to use the ORPort IPv4 address to guess the Address, before falling back to less reliable methods like interface addresses. (This is also what we do to find the IPv6 address in the descriptor - use the first IPv6 ORPort address.) But no-one has written the code for it yet: https://trac.torproject.org/projects/tor/ticket/19919
Sep 19 11:37:41.000 [notice] Tor 0.2.8.7 (git-cc2f02ef17899f86) opening log file.
Sep 19 11:37:41.194 [notice] Tor v0.2.8.7 (git-cc2f02ef17899f86) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.1t and Zlib 1.2.8.
...
Sep 19 11:37:41.198 [warn] Tor is running as an exit relay. If you did not want this behavior, please set the ExitRelay option to 0. If you do want to run an exit Relay, please set the ExitRelay option to 1 to disable this warning, and for forward compatibility.
Sep 19 11:37:41.198 [warn] You specified a public address '0.0.0.0:9050' for SocksPort. Other people on the Internet might find your computer and use it as an open proxy. Please don't allow this unless you have a good reason.
Sep 19 11:37:41.199 [notice] Opening Socks listener on 0.0.0.0:9050
I hope you have a SocksPolicy in place (or the equivalent firewall rules), otherwise anyone can use your relay as an unencrypted, open proxy.
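For example, something along these lines (192.0.2.0/24 is just a placeholder for whatever client range should actually be allowed) keeps the listener from acting as an open proxy:

  SocksPolicy accept 192.0.2.0/24
  SocksPolicy reject *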
Sep 19 11:37:41.199 [notice] Opening Control listener on 127.0.0.1:9051
Sep 19 11:37:41.199 [notice] Opening OR listener on 193.171.202.146:9001
Sep 19 11:37:41.199 [notice] Opening OR listener on 193.171.202.146:443
Sep 19 11:37:41.199 [notice] Opening Directory listener on 193.171.202.146:9030
...
Sep 19 11:37:51.000 [notice] Now checking whether ORPort 193.171.202.150:9001 and DirPort 193.171.202.150:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Sep 19 11:38:30.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent.
This is an interesting edge-case: Tor doesn't (and likely can't) check that the ORPort a client thinks it is connecting to, is the same as the one it just advertised. So the ORPort reachability check succeeded on your relay, because some clients still had the old address in the old descriptor. And Tor never repeats the check.
https://trac.torproject.org/projects/tor/ticket/20165
...
Sep 19 11:57:50.000 [warn] Your server (193.171.202.150:9030) has not managed to confirm that its DirPort is reachable. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
But the DirPort check fails, because it uses the address in the descriptor. And since 0.2.8.1-alpha, the DirPort needs to be reachable for relays to publish their descriptor:
o Minor bugfixes (relays):
    - Check that both the ORPort and DirPort (if present) are reachable
      before publishing a relay descriptor. Otherwise, relays publish a
      descriptor with DirPort 0 when the DirPort reachability test takes
      longer than the ORPort reachability test. Fixes bug 18050; bugfix
      on 0.1.0.1-rc. Reported by "starlight", patch by "teor".
...
Sep 19 12:01:48.000 [notice] Tor 0.2.5.12 (git-3731dd5c3071dcba) opening log file.
...
Sep 19 12:01:52.000 [notice] Now checking whether ORPort 193.171.202.146:9001 and DirPort 193.171.202.146:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Sep 19 12:01:53.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent.
Sep 19 12:01:53.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.
...
Please note the difference (0.2.8.7):
Sep 19 11:37:51.000 [notice] Now checking whether ORPort 193.171.202.150:9001 and DirPort 193.171.202.150:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
vs. (0.2.5.12):
Sep 19 12:01:52.000 [notice] Now checking whether ORPort 193.171.202.146:9001 and DirPort 193.171.202.146:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
I.e. 0.2.8.7 does not seem to honor the address the socket is bound to when starting the reachability checks from the outside
Tor never used the address the socket was bound to. It just happened to guess the one you wanted in 0.2.5, and guesses a different one in 0.2.8.
Tor can't actually use the bind address for reachability checks, because there's only one IPv4 address in a relay descriptor, and that's the one that other tor instances will try to connect to. Also, what if the ORPort and DirPort are on different addresses? What if the relay is behind a NAT? There's a discussion of these kinds of issues here: https://trac.torproject.org/projects/tor/ticket/17782
(it seems to use the address that either the default route is associated with or the OutboundBindAddress) - although the socket binding itself is done correctly (i.e. the netstat output is exactly the same for both versions, with tor binding to the specific IP address only for the Dir and both OR ports). Consequently, the node is declared as non-reachable and drops off the globe/atlas...
Has this change been intentional? I have to admit we have not yet checked the source code for further debugging, as we wanted to get the node back up as quickly as possible (after our unfortunately timed holidays, sorry for that).
No, I don't think the change was intentional. It could have been any of the changes below that caused this issue, but I would guess it's probably an unintentional result of commit 31eb486 in 17027, which inadvertently reorders the address list by using SMARTLIST_DEL_CURRENT() rather than SMARTLIST_DEL_CURRENT_KEEPORDER().
I've logged a ticket so we can fix this: https://trac.torproject.org/projects/tor/ticket/20163
In 0.2.8.1-alpha:
o Minor features (relay, address discovery):
    - Add a family argument to get_interface_addresses_raw() and
      subfunctions to make network interface address interrogation more
      efficient. Now Tor can specifically ask for IPv4, IPv6 or both types
      of interfaces from the operating system. Resolves ticket 17950.
    - When get_interface_address6_list(.,AF_UNSPEC,.) is called and fails
      to enumerate interface addresses using the platform-specific API,
      have it rely on the UDP socket fallback technique to try and find out
      what IP addresses (both IPv4 and IPv6) our machine has. Resolves
      ticket 17951.
And 0.2.7.1-alpha:
o Minor bugfixes (security, exit policies):
    - ExitPolicyRejectPrivate now also rejects the relay's published IPv6
      address (if any), and any publicly routable IPv4 or IPv6 addresses
      on any local interfaces. ticket 17027. Patch by "teor". Fixes bug
      17027; bugfix on 0.2.0.11-alpha.

o Minor bugfixes (network):
    - When attempting to use fallback technique for network interface
      lookup, disregard loopback and multicast addresses since they are
      unsuitable for public communications.

o Code simplification and refactoring:
    - Move the hacky fallback code out of get_interface_address6() into
      separate function and get it covered with unit-tests. Resolves
      ticket 14710.
And 0.2.6.3-alpha:
o Minor bugfixes (portability):
    - Fix the ioctl()-based network interface lookup code so that it will
      work on systems that have variable-length struct ifreq, for example
      Mac OS X.
    - Refactor the get_interface_addresses_raw() doom-function into
      multiple smaller and simpler subfunctions. Cover the resulting
      subfunctions with unit-tests. Fixes a significant portion of issue
      12376.
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B ricochet:ekmygaiu4rzgsk6n xmpp: teor at torproject dot org
Wow, thanks for the quick and detailed answer!
On 2016-09-19 16:32, teor wrote:
This isn't the default setup, but it's actually quite common, particularly for Exit relays that want to segregate their outbound traffic from their public relay address.
Good to know that we aren't doing anything that's too far off the normal case.
root@tor2 ~ # grep 193.171.202 /etc/tor/torrc
ORPort 193.171.202.146:9001
ORPort 193.171.202.146:443
OutboundBindAddress 193.171.202.150
DirPort 193.171.202.146:9030
Since you don't set your IPv4 address using Address, this means that Tor tries to guess your address. On a machine with multiple IPv4 addresses, this means it might not guess the address you expect.
I think that 0.2.5 only looked at the first interface the OS returned, and that happened to be the one you wanted. But guessing using interface addresses is never going to be reliable on multi-IPv4 machines.
Between 0.2.5 and 0.2.8, the address guessing code was modified several times. It now looks at all your local network interfaces to guess the address. I think there was an unintentional ordering change. We can fix that (see below).
Thanks for the detailed explanation. It is now clearer to me that we should have set Address all along (I misinterpreted the option in the beginning, thinking only of NAT cases).
Sep 19 11:37:41.199 [notice] Opening Socks listener on 0.0.0.0:9050
I hope you have a SocksPolicy in place (or the equivalent firewall rules), otherwise anyone can use your relay as an unencrypted, open proxy.
Yes, both ;) We intentionally use that exit node as an entry for some clients directly at the institute, mostly for two reasons: a) we like the eat-your-own-dog-food policy: if the node stops accepting traffic at all, we should notice fairly quickly even without our monitoring systems alerting anybody (although both did not catch this particular case, and we are working on adding more monitoring rules to catch that in the future...); and b) to increase the k in k-anonymity specifically for this node. Incoming traffic could be from anybody, including ourselves, so that's an additional reason not to (be forced to) monitor (an admittedly weak legal defense, but it may assist at some future point).
This is an interesting edge-case: Tor doesn't (and likely can't) check that the ORPort a client thinks it is connecting to, is the same as the one it just advertised. So the ORPort reachability check succeeded on your relay, because some clients still had the old address in the old descriptor. And Tor never repeats the check.
Interesting. I admit it did lead me on a false trail during debugging (we checked some hops in between to see if they started blocking port 9030 before resorting to rolling back, which is when I noticed the different log messages - and it didn't help that nmap doesn't have 9030 in its default TCP ports list either...).
Tor can't actually use the bind address for reachability checks, because there's only one IPv4 address in a relay descriptor, and that's the one that other tor instances will try to connect to. Also, what if the ORPort and DirPort are on different addresses? What if the relay is behind a NAT?
That's a lot clearer to me now. Thanks.
best regards, Rene
On Mon, Sep 19, 2016 at 10:32 AM, teor teor2345@gmail.com wrote:
But I also think we should warn when Tor guesses between multiple addresses, because some operators are going to find that Tor guesses one they don't want:
Might help to emit a simple table on startup with
- explicitly configured addrs / ports
- guessed addrs / ports vs potential config statements to use to lock those down
- unconfigured and unguessed config statements re IP/port config
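Roughly along these lines for the case at hand (purely illustrative, this is not output tor currently produces):

  configured:  ORPort 193.171.202.146:9001, ORPort 193.171.202.146:443, DirPort 193.171.202.146:9030, OutboundBindAddress 193.171.202.150
  guessed:     Address 193.171.202.150 (set "Address" explicitly to lock this down)
  unset:       Address, IPv6 ORPort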
Hello,
On 9/19/2016 4:14 PM, René Mayrhofer wrote: [SNIP]
Problem: This worked nicely with Tor 0.2.5.12-1 on Debian Jessie. We upgraded about two weeks ago to 0.2.8.7-1 from the Tor apt repositories (mostly in response to https://blog.torproject.org/blog/tor-0287-released-important-fixes as a wakeup call that we were using old versions from Debian main). At first, it seemed to work well enough, but then the holidays came and we didn't actively watch it for the next week.... Now with 0.2.8.7-1, the traffic sent to our node started declining until it vanished completely. After a bit of debugging and rolling back to 0.2.5.12-1 (which is now active on our node as of a few hours ago, slowly approaching the 200MBit/s again), it seems that we discovered a regression concerning the handling of sockets. I can best summarize it with the relevant torrc config options and startup log lines from both versions:
root@tor2 ~ # grep 193.171.202 /etc/tor/torrc
ORPort 193.171.202.146:9001
ORPort 193.171.202.146:443
OutboundBindAddress 193.171.202.150
DirPort 193.171.202.146:9030
Yes, this is an issue with how we guess Address in some cases. It was initially reported here:
https://trac.torproject.org/projects/tor/ticket/13953
We made the first step towards fixing it (nice patch by teor): now we log a warning when the address we listen on does not match the one in the descriptor, and the self-test doesn't pass, so the descriptor is not published at all.
We will fix this entirely in this ticket: https://trac.torproject.org/projects/tor/ticket/19919
There we will use the first explicit public IP address configured for ORPort that we listen on as the Address.
I wanted to create a separate ticket for doing the same with OutboundBindAddress (use the first explicit public IP address configured for ORPort that we listen on as the OutboundBindAddress) -- but I see that in your setup this would not fix it anyway, so we will leave it aside for the moment. I think OutboundBindAddress overrides Address for outgoing connections, so unless otherwise configured OutboundBindAddress == Address.
Thanks for running Austria's fastest exit -- this rocks!
On Mon, Sep 19, 2016 at 9:14 AM, René Mayrhofer rm@ins.jku.at wrote:
Sep 19 11:37:41.000 [warn] I have no descriptor for the router named "ins1" in my declared family; I'll use the nickname as is, but this may confuse clients.
Sep 19 11:37:41.000 [warn] I have no descriptor for the router named "ins2" in my declared family; I'll use the nickname as is, but this may confuse clients.
Sep 19 11:37:41.000 [notice] Your Tor server's identity key fingerprint is 'ins0 01A9258A46E97FF8B2CAC7910577862C14F2C524'
Sep 19 11:38:34.000 [warn] You specified a server "ins1" by name, but the directory authorities do not have any key registered for this nickname -- so it could be used by any server, not just the one you meant. To make sure you get the same server in the future, refer to it by key, as "$CD9FD887A4572D46938640BA65F258851F1E418B".
Sep 19 11:38:34.000 [warn] You specified a server "ins2" by name, but the directory authorities do not have any key registered for this nickname -- so it could be used by any server, not just the one you meant. To make sure you get the same server in the future, refer to it by key, as "$7C3AF46F77445A0B1E903A5AF5B730A05F127BFC".
A side note, unless you have a reason not to, or the other nodes are offline, you should fix up the MyFamily lines in the configs of your nodes, to at least save noise in your logs.
On 2016-09-19 20:01, grarpamp wrote:
On Mon, Sep 19, 2016 at 9:14 AM, René Mayrhofer rm@ins.jku.at wrote:
Sep 19 11:37:41.000 [warn] I have no descriptor for the router named "ins1" in my declared family; I'll use the nickname as is, but this may confuse clients.
Sep 19 11:37:41.000 [warn] I have no descriptor for the router named "ins2" in my declared family; I'll use the nickname as is, but this may confuse clients.
Sep 19 11:37:41.000 [notice] Your Tor server's identity key fingerprint is 'ins0 01A9258A46E97FF8B2CAC7910577862C14F2C524'
Sep 19 11:38:34.000 [warn] You specified a server "ins1" by name, but the directory authorities do not have any key registered for this nickname -- so it could be used by any server, not just the one you meant. To make sure you get the same server in the future, refer to it by key, as "$CD9FD887A4572D46938640BA65F258851F1E418B".
Sep 19 11:38:34.000 [warn] You specified a server "ins2" by name, but the directory authorities do not have any key registered for this nickname -- so it could be used by any server, not just the one you meant. To make sure you get the same server in the future, refer to it by key, as "$7C3AF46F77445A0B1E903A5AF5B730A05F127BFC".
A side note, unless you have a reason not to, or the other nodes are offline, you should fix up the MyFamily lines in the configs of your nodes, to at least save noise in your logs.
You're right, we should have done that long ago. For the record, those two fingerprints are Tor exit/relay nodes that we test on small ARM boxes (Odroid XU3 at the moment) for trying out new configurations and experimenting e.g. with IPv6. Unfortunately, as nice as it would have been to run the main relay on a 5W computer, it simply couldn't push the bandwidth we wanted to provide (and we haven't spent enough time figuring out the bottleneck that makes one ARM core max out while the remaining ones stay at around 20-30%). Nonetheless, since they are all co-located in the same server closet, traffic should never go through more than one of them at any time.
best regards, Rene
On Mon, Sep 19, 2016 at 9:14 AM, René Mayrhofer rm@ins.jku.at wrote:
Setup: Please note that our setup is a bit particular for reasons that we will explain in more detail in a later message (including a proposed patch to the current source which has been pending also because of the holiday situation...). Briefly summarizing, we use a different network interface for "incoming" (Tor encrypted traffic) than for "outgoing" (mostly clearnet traffic from the exit node, but currently still includes outgoing Tor relay traffic to other nodes). The outgoing interface has the default route associated, while the incoming interface will only originate traffic in response to those incoming connections. Consequently, we let our Tor node only bind to the IP address assigned to the incoming interface 193.171.202.146, while it will initiate new outgoing connections with IP 193.171.202.150.
There could be further benefit / flexibility in a 'proposed patch' that would allow taking the incoming ORPort traffic and further splitting it outbound by a) OutboundBindAddressInt for that which is going back internal to tor, and b) OutboundBindAddressExt for that which is going out external to clearnet. Those two would include port specification for optional use on the same IP. I do not recall if this splitting is currently possible.
On 2016-09-19 20:24, grarpamp wrote:
On Mon, Sep 19, 2016 at 9:14 AM, René Mayrhofer rm@ins.jku.at wrote:
Setup: Please note that our setup is a bit particular for reasons that we will explain in more detail in a later message (including a proposed patch to the current source which has been pending also because of the holiday situation...). Briefly summarizing, we use a different network interface for "incoming" (Tor encrypted traffic) than for "outgoing" (mostly clearnet traffic from the exit node, but currently still includes outgoing Tor relay traffic to other nodes). The outgoing interface has the default route associated, while the incoming interface will only originate traffic in response to those incoming connections. Consequently, we let our Tor node only bind to the IP address assigned to the incoming interface 193.171.202.146, while it will initiate new outgoing connections with IP 193.171.202.150.
There could be further benefit / flexibility in a 'proposed patch' that would allow taking the incoming ORPort traffic and further splitting it outbound by a) OutboundBindAddressInt for that which is going back internal to tor, and b) OutboundBindAddressExt for that which is going out external to clearnet. Those two would include port specification for optional use on the same IP. I do not recall if this splitting is currently possible.
That is exactly what we have patched our local Tor node to do, although with a different (slightly hacky, so the patch will be an RFC type) approach: marking real exit traffic with a ToS flag and leaving the decision of what to do with it to the next layer (in our setup, Linux kernel based policy routing on the same host). There may be a much better approach to achieve this goal. I plan on writing up our setup (and the rationale behind it) along with the "works for me but is not ready for upstream inclusion" patch tomorrow.
best regards, Rene
On Mon, Sep 19, 2016 at 5:36 PM, René Mayrhofer rm@ins.jku.at wrote:
That is exactly what we have patched our local Tor node to do, although with a different (slightly hacky, so the patch will be an RFC type) approach: marking real exit traffic with a ToS flag and leaving the decision of what to do with it to the next layer (in our setup, Linux kernel based policy routing on the same host). There may be a much better approach to achieve this goal. I plan on writing up our setup (and the rationale behind it) along with the "works for me but is not ready for upstream inclusion" patch tomorrow.
Part of the rationale could be 'Hi bigwigs... stats say we helped 83GB of traffic move strictly to clearnet today without severe issue, please keep us funded.' Another part is simply traffic engineering and bandwidth cost, and possibly, in your edu case, I2 routing. ToS tagging is an interesting approach. Though I think for more common operators at hosters, the IP/port approach would work better. Not to say both cannot be added :)
Hi everybody,
Unfortunately, it took a bit longer than expected, but here goes... FWIW, after the recent update (with subsequent downtime), our exit node is fully up and running again (including this patch) and relaying over 1TB a day at the moment.
On 2016-09-19 23:36, René Mayrhofer wrote:
On 2016-09-19 20:24, grarpamp wrote:
On Mon, Sep 19, 2016 at 9:14 AM, René Mayrhofer rm@ins.jku.at wrote:
Setup: Please note that our setup is a bit particular for reasons that we will explain in more detail in a later message (including a proposed patch to the current source which has been pending also because of the holiday situation...). Briefly summarizing, we use a different network interface for "incoming" (Tor encrypted traffic) than for "outgoing" (mostly clearnet traffic from the exit node, but currently still includes outgoing Tor relay traffic to other nodes). The outgoing interface has the default route associated, while the incoming interface will only originate traffic in response to those incoming connections. Consequently, we let our Tor node only bind to the IP address assigned to the incoming interface 193.171.202.146, while it will initiate new outgoing connections with IP 193.171.202.150.
There could be further benefit / flexibility in a 'proposed patch' that would allow taking the incoming ORPort traffic and further splitting it outbound by a) OutboundBindAddressInt for that which is going back internal to tor, and b) OutboundBindAddressExt for that which is going out external to clearnet. Those two would include port specification for optional use on the same IP. I do not recall if this splitting is currently possible.
That is exactly what we have patched our local Tor node to do, although with a different (slightly hacky, so the patch will be an RFC type) approach: marking real exit traffic with a ToS flag and leaving the decision of what to do with it to the next layer (in our setup, Linux kernel based policy routing on the same host). There may be a much better approach to achieve this goal. I plan on writing up our setup (and the rationale behind it) along with the "works for me but is not ready for upstream inclusion" patch tomorrow.
[Slightly long description of our setup to provide sufficient context for the patch]

Attached you will find a PDF (sorry about the image artefacts, MS Office vs. Libreoffice, etc.) describing our rough setup. The whole setup (Tor node(s), monitoring server, switch, firewall, and soon a webcam watching the rack with an unfiltered live stream publicly available) is in a separate small server room that does not host any other hardware. We use an IPv4 range separate from the main university network (which is the main reason why we don't relay IPv6 yet - we still have to acquire a separate IPv6 range so as not to impact the reputation of the main university subnet). We are very grateful to the Johannes Kepler University Linz and the Austrian ACOnet for supporting this!
Ideally, we would use two different providers to compartmentalize "incoming" (i.e. encrypted Tor network) traffic even further from "outgoing" (for our exit node, mostly clearnet) traffic and make traffic correlation harder (this doesn't help against a global adversary, as we know, but at least a single ISP would not be able to directly correlate both sides of the relay). Although we don't have two different providers at this point, we still use two different network interfaces with associated IP addresses (one advertised as the Tor node for incoming traffic, and the other one with the default route assigned for outgoing traffic). There are two main reasons for this (and a few minor ones listed in the PDF):

* Technical: In the current project for statistical traffic analysis (which is the reason for running the exit node, and the reason for the gracious support by ACOnet), we are interested only in exit traffic leaving the Tor network (i.e. into the "clear" net). We explicitly do not want to analyze any traffic in which our node is an entry or middle relay, or traffic involving hidden services. This statistical analysis is not done on the Tor node itself, but on a separate monitoring host (more on that below).

* Legal: In case of a court order, it may be harder to compel us to start monitoring incoming as well as outgoing traffic, as our system architecture currently doesn't allow that. In other words, adding traffic correlation would be more than adding or removing a filter on the monitoring host; it would require a significant change in our setup. That may raise the bar for a corresponding legal order (not that we have received _any_ legal order concerning our node so far, this is really just another layer of protection).
The monitoring server collects - anonymized - statistical data by watching the outgoing interface. There is another layer of protection in the form of a passive network tap: the switch is configured so as to mirror traffic between the Tor node outgoing interface and the upstream firewall to a network port on which the monitoring server can passively sniff. That is, with this setup we cannot tamper with the (incoming or outgoing) traffic in any way (another hurdle for potential legal orders). On the monitoring server, we strip IP target addresses and only record statistics on port numbers, AS numbers, and countries (based on a local geoip database, without any external queries).

The statistics are computed using monthly batch jobs (we can barely aggregate the traffic data in the same time frame that we collect netflows...) and are online at https://www.ins.tor.net.eu.org/tor-info/index.html. We are still in the process of fully automating the aggregation over anonymized netflows, which is why the latest time frame fully analyzed is June 2016 at the time of this writing. An academic paper on the collected traffic statistics is to be submitted within the next few weeks (showing e.g. that nearly all traffic that we see is with a very high probability legal in our jurisdiction and that the percentage of encrypted traffic is slowly but steadily increasing).

In the spirit of full transparency, we have yet another precaution in place in the form of different responsibilities: Michael Sonntag is the only person with remote access to the monitoring server, and he is running the data analysis. Rudolf Hörmanseder is the only person with remote access to the switch and firewall. I am the only person with remote access to the Tor node itself (and as a full, appointed professor at an Austrian university, this falls under my right to research and may be legally hard to forbid). In other words, none of us could, without colluding with another person, increase the set of data items being monitored/analyzed. Anybody with physical access could of course make arbitrary changes to all parts of the setup, which is why we intend to put a live webcam into that server room. We will also publish a more complete description of our technical and legal setup including the specific reasoning in an Austrian/European jurisdiction.
[The patch]

Currently, both (clearnet) exit traffic and encrypted Tor traffic (to other nodes and hidden services) use the outgoing interface, as the Tor daemon simply creates TCP sockets and uses the default route (which points at the outgoing interface). A patch as suggested by grarpamp above could solve that issue. In the meantime, we have created a slightly hacky patch as attached. The simplest way to record only exit traffic and separate it from outgoing Tor traffic seemed to be marking those packets with a ToS value - which, as far as we can see, can be done with a minimally invasive patch adding that option at a single point in connection.c.

At the moment, we use this ToS value in a filter expression at the monitoring server to make sure that we do not analyze outgoing Tor traffic. We also plan to use it for policy routing rules at the Linux kernel level to send outgoing Tor traffic back out the "incoming" interface (to distinguish between Tor traffic and clear traffic). When that works, the ToS flag can actually be removed again before the packets leave the Tor node.

What do you think of that approach? Does that seem reasonable, or would there be a cleaner approach to achieve that kind of separation of exit traffic from other traffic for analysis purposes? If this patch seems useful, we can extend it to make the marking configurable for potential upstream inclusion.
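To illustrate the core of the idea (this is only a minimal sketch, not the attached patch itself; the ToS value 0x04 is an arbitrary example), the marking essentially comes down to a setsockopt() call on the exit connection's socket:

/* Minimal sketch (not the attached patch): mark a socket with a ToS
 * value so that later stages (capture filters on the monitoring host,
 * Linux policy routing) can tell exit traffic apart from OR traffic. */
#include <netinet/in.h>
#include <netinet/ip.h>
#include <stdio.h>
#include <sys/socket.h>

static void
mark_exit_socket_tos(int fd)
{
  int tos = 0x04; /* arbitrary example value */
  if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0) {
    /* Non-fatal: the connection still works, it is just not marked. */
    perror("setsockopt(IP_TOS)");
  }
}

On the routing side, something along the lines of "ip rule add tos 0x04 table 100" (with a default route out the desired interface in that table) can then steer the marked exit packets - or, conversely, the unmarked Tor-to-Tor packets - out a different interface.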
Rene (Head of the Institute for Networks and Security at JKU)
On 23 Sep 2016, at 13:02, René Mayrhofer rm@ins.jku.at wrote:
Hi everybody,
Unfortunately, it took a bit longer than expected, but here goes... FWIW, after the recent update (with subsequent downtime), our exit node is fully up and running again (including this patch) and relaying over 1TB a day at the moment.
Thanks for running a fast Tor Exit!
On 2016-09-19 at 23:36, René Mayrhofer wrote:
On 2016-09-19 at 20:24, grarpamp wrote:
On Mon, Sep 19, 2016 at 9:14 AM, René Mayrhofer rm@ins.jku.at wrote:
Setup: Please note that our setup is a bit particular for reasons that we will explain in more detail in a later message (including a proposed patch to the current source which has been pending also because of the holiday situation...). Briefly summarizing, we use a different network interface for "incoming" (Tor encrypted traffic) than for "outgoing" (mostly clearnet traffic from the exit node, but currently still includes outgoing Tor relay traffic to other nodes). The outgoing interface has the default route associated, while the incoming interface will only originate traffic in response to those incoming connections. Consequently, we let our Tor node only bind to the IP address assigned to the incoming interface 193.171.202.146, while it will initiate new outgoing connections with IP 193.171.202.150.
There could be further benefit / flexibility in a 'proposed patch' that would allow taking the incoming ORPort traffic and further splitting it outbound by a) OutboundBindAddressInt for that which is going back internal to tor, and b) OutboundBindAddressExt for that which is going out external to clearnet. Those two would include port specification for optional use on the same IP.
Binding to a particular source port is a bad idea - as the 4-tuple of (source IP, source port, destination IP, destination port) must be unique, this would mean that the Exit could only make one connection per destination IP and port - which would prevent multiple clients from querying the same website at the same time.
I do not recall if this splitting is currently possible.
No, it's not.
That is exactly what we have patched our local Tor node to do, although with a different (slightly hacky, so the patch will be an RFC type) approach: marking real exit traffic with a ToS flag to leave the decision of what to do with it to the next layer (in our setup, Linux kernel based policy routing on the same host). There may be a much better approach to achieve this goal. I plan on writing up our setup (and the rationale behind it) along with the "works for me but is not ready for upstream inclusion" patch tomorrow.
I'm not sure if we want to tag Tor traffic with QoS values at Exits. Any tagging carries some degree of risk, because it makes traffic look more unique. I'm not sure how much of a risk QoS tagging represents.
I would prefer to add config options OutboundBindAddressOR and OutboundBindAddressExit, which would default to OutboundBindAddress when not set. (And could be specified twice, once for IPv4, and once for IPv6.)
The one concern I have about this is that Tor-over-Tor would stick out more, as it would look like Tor coming out the OutboundBindAddressExit IP. But we don't encourage Tor-over-Tor anyway.
I'd recommend a patch that modifies this section in connection_connect to use OutboundBindAddressOR and OutboundBindAddressExit, preferably with the Exit/OR/(all) and IPv4/IPv6 logic refactored into its own function.
if (!tor_addr_is_loopback(addr)) {
  const tor_addr_t *ext_addr = NULL;
  if (protocol_family == AF_INET &&
      !tor_addr_is_null(&options->OutboundBindAddressIPv4_))
    ext_addr = &options->OutboundBindAddressIPv4_;
  else if (protocol_family == AF_INET6 &&
           !tor_addr_is_null(&options->OutboundBindAddressIPv6_))
    ext_addr = &options->OutboundBindAddressIPv6_;
  if (ext_addr) {
    memset(&bind_addr_ss, 0, sizeof(bind_addr_ss));
    bind_addr_len = tor_addr_to_sockaddr(ext_addr, 0,
                                         (struct sockaddr *) &bind_addr_ss,
                                         sizeof(bind_addr_ss));
    if (bind_addr_len == 0) {
      log_warn(LD_NET,
               "Error converting OutboundBindAddress %s into sockaddr. "
               "Ignoring.", fmt_and_decorate_addr(ext_addr));
    } else {
      bind_addr = (struct sockaddr *)&bind_addr_ss;
    }
  }
}
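To make the refactoring suggestion concrete, the helper I have in mind would look roughly like this. The OR/Exit struct fields are made up here (they would come from the new config options), so treat this as a sketch of the shape rather than working code:

static const tor_addr_t *
conn_get_outbound_bind_addr(const or_options_t *options,
                            int protocol_family, int is_exit_conn)
{
  const tor_addr_t *addr = NULL;
  if (protocol_family == AF_INET) {
    addr = is_exit_conn ? &options->OutboundBindAddressExitIPv4_
                        : &options->OutboundBindAddressORIPv4_;
    if (tor_addr_is_null(addr))
      addr = &options->OutboundBindAddressIPv4_;
  } else if (protocol_family == AF_INET6) {
    addr = is_exit_conn ? &options->OutboundBindAddressExitIPv6_
                        : &options->OutboundBindAddressORIPv6_;
    if (tor_addr_is_null(addr))
      addr = &options->OutboundBindAddressIPv6_;
  }
  /* NULL means "no outbound bind address configured, use the default". */
  return (addr && !tor_addr_is_null(addr)) ? addr : NULL;
}

connection_connect would then just call this helper and keep the existing tor_addr_to_sockaddr / bind handling.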
Ideally, we would use 2 different providers to even further compartmentalize "incoming" (i.e. encrypted Tor network) from "outgoing" (for our exit node, mostly clearnet) traffic and make traffic correlation harder (doesn't help against a global adversary as we know, but at least a single ISP would not be able to directly correlate both sides of the relay). Although we don't have two different providers at this point, we still use two different network interfaces with associated IP addresses (one advertised as the Tor node for incoming traffic, and the other one with the default route assigned for outgoing traffic).
This sounds like an interesting setup. I'd be keen to see how it works out.
Some Exit providers (typically with their own AS) peer with multiple other providers, because this makes it harder for a single network tap to capture all their traffic.
Not quite the same as your setup, because OR and Exit traffic goes over all the links, rather than each going over a separate link.
... [The patch] Currently, both (clearnet) exit traffic and encrypted Tor traffic (to other nodes and hidden services) use the outgoing interface, as the Tor daemon simply creates TCP sockets and uses the default route (which points at the outgoing interface). A patch as suggested by grarpamp above could solve that issue. In the meantime, we have created a slightly hacky patch as attached. The simplest way to only record exit traffic and separate it from outgoing Tor traffic seemed to be to mark those packets with a ToS value - which, as far as we can see, can be done with a minimally invasive patch adding that option at a single point in connection.c. At the moment, we use this ToS value in a filter expression at the monitoring server to make sure that we do not analyze outgoing Tor traffic. We also plan to use it for policy routing rules at the Linux kernel level to send outgoing Tor traffic back out the "incoming" interface (to distinguish between Tor traffic and clear traffic). When that works, the ToS flag can actually be removed again before the packets leave the Tor node.
Binding to different IP addresses can also be used for filtering and traffic redirection. Does having separate bind addresses for OR and Exit traffic work for your use case?
What do you think of that approach? Does that seem reasonable or would there be a cleaner approach to achieve that kind of separation of exit traffic from other traffic for analysis purposes? If this patch seems useful, we can extend it to make this marking configurable for potential upstream inclusion.
Rene (Head of the Institute for Networks and Security at JKU)
T
-- Tim Wilson-Brown (teor)
teor2345 at gmail dot com PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B ricochet:ekmygaiu4rzgsk6n xmpp: teor at torproject dot org
That is exactly what we have patched our local Tor node to do, although with a different (slightly hacky, so the patch will be an RFC type) approach: marking real exit traffic with a ToS flag to leave the decision of what to do with it to the next layer (in our setup, Linux kernel based policy routing on the same host). There may be a much better approach to achieve this goal. I plan on writing up our setup (and the rationale behind it) along with the "works for me but is not ready for upstream inclusion" patch tomorrow.
I'm not sure if we want to tag Tor traffic with QoS values at Exits. Any tagging carries some degree of risk, because it makes traffic look more unique. I'm not sure how much of a risk QoS tagging represents.
Fully agreed. That is why - if this turns out to be the best approach - we would remove the QoS tag again before the packets leave the host (and only use it for local policy routing decisions).
I would prefer to add config options OutboundBindAddressOR and OutboundBindAddressExit, which would default to OutboundBindAddress when not set. (And could be specified twice, once for IPv4, and once for IPv6.)
The one concern I have about this is that Tor-over-Tor would stick out more, as it would look like Tor coming out the OutboundBindAddressExit IP. But we don't encourage Tor-over-Tor anyway.
I'd recommend a patch that modifies this section in connection_connect to use OutboundBindAddressOR and OutboundBindAddressExit, preferably with the Exit/OR/(all) and IPv4/IPv6 logic refactored into its own function.
if (!tor_addr_is_loopback(addr)) {
  const tor_addr_t *ext_addr = NULL;
  if (protocol_family == AF_INET &&
      !tor_addr_is_null(&options->OutboundBindAddressIPv4_))
    ext_addr = &options->OutboundBindAddressIPv4_;
  else if (protocol_family == AF_INET6 &&
           !tor_addr_is_null(&options->OutboundBindAddressIPv6_))
    ext_addr = &options->OutboundBindAddressIPv6_;
  if (ext_addr) {
    memset(&bind_addr_ss, 0, sizeof(bind_addr_ss));
    bind_addr_len = tor_addr_to_sockaddr(ext_addr, 0,
                                         (struct sockaddr *) &bind_addr_ss,
                                         sizeof(bind_addr_ss));
    if (bind_addr_len == 0) {
      log_warn(LD_NET,
               "Error converting OutboundBindAddress %s into sockaddr. "
               "Ignoring.", fmt_and_decorate_addr(ext_addr));
    } else {
      bind_addr = (struct sockaddr *)&bind_addr_ss;
    }
  }
}
<snip>
Binding to different IP addresses can also be used for filtering and traffic redirection. Does having separate bind addresses for OR and Exit traffic work for your use case?
Yes, separate IP addresses for OutboundBindAddressOR (which we would set to our "incoming" interface address) and OutboundBindAddressExit (which we would set to our "outgoing" interface address) would work for our use case. One caveat is that we would then no longer have the mixing of relay and exit traffic (which overlaps e.g. on common ports like 80) on our outgoing interface/IP address. Without having analyzed it in detail, our gut feeling was that this mixing (if the QoS flag is removed) may actually be beneficial against traffic correlation attacks and/or filtering/scanning of the exit traffic by upstream providers (because it would require DPI, the more costly option, to distinguish e.g. Tor relay from HTTPS traffic). If this assumption is unwarranted and you don't see additional information leakage by separating relay and exit traffic by IP (and as mentioned, we have not thought about this systematically enough yet), then this patch would solve our issue. I assume it would need additional changes to add the new OutboundBindAddressOR and OutboundBindAddressExit options to the config parser?
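For concreteness, assuming the options end up being named OutboundBindAddressOR and OutboundBindAddressExit as you propose, our torrc would then contain roughly the following (addresses as in our current setup):

# "incoming" interface: advertised ORPort address, also used as source
# address for outgoing OR connections to other relays
OutboundBindAddressOR 193.171.202.146
# "outgoing" interface (default route): source address for clearnet exit traffic
OutboundBindAddressExit 193.171.202.150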
best regards, Rene
On 26 Sep 2016, at 05:43, René Mayrhofer rm@ins.jku.at wrote:
That is exactly what we have patched our local Tor node to do, although with a different (slightly hacky, so the patch will be an RFC type) approach: marking real exit traffic with a ToS flag to leave the decision of what to do with it to the next layer (in our setup, Linux kernel based policy routing on the same host). There may be a much better approach to achieve this goal. I plan on writing up our setup (and the rationale behind it) along with the "works for me but is not ready for upstream inclusion" patch tomorrow.
I'm not sure if we want to tag Tor traffic with QoS values at Exits. Any tagging carries some degree of risk, because it makes traffic look more unique. I'm not sure how much of a risk QoS tagging represents.
Fully agreed. That is why - if this turns out to be the best approach - we would remove the QoS tag again before the packets leave the host (and only use it for local policy routing decisions).
I would prefer to add config options OutboundBindAddressOR and OutboundBindAddressExit, which would default to OutboundBindAddress when not set. (And could be specified twice, once for IPv4, and once for IPv6.)
The one concern I have about this is that Tor-over-Tor would stick out more, as it would look like Tor coming out the OutboundBindAddressExit IP. But we don't encourage Tor-over-Tor anyway.
I'd recommend a patch that modifies this section in connection_connect to use OutboundBindAddressOR and OutboundBindAddressExit, preferably with the Exit/OR/(all) and IPv4/IPv6 logic refactored into its own function.
if (!tor_addr_is_loopback(addr)) {
  const tor_addr_t *ext_addr = NULL;
  if (protocol_family == AF_INET &&
      !tor_addr_is_null(&options->OutboundBindAddressIPv4_))
    ext_addr = &options->OutboundBindAddressIPv4_;
  else if (protocol_family == AF_INET6 &&
           !tor_addr_is_null(&options->OutboundBindAddressIPv6_))
    ext_addr = &options->OutboundBindAddressIPv6_;
  if (ext_addr) {
    memset(&bind_addr_ss, 0, sizeof(bind_addr_ss));
    bind_addr_len = tor_addr_to_sockaddr(ext_addr, 0,
                                         (struct sockaddr *) &bind_addr_ss,
                                         sizeof(bind_addr_ss));
    if (bind_addr_len == 0) {
      log_warn(LD_NET,
               "Error converting OutboundBindAddress %s into sockaddr. "
               "Ignoring.", fmt_and_decorate_addr(ext_addr));
    } else {
      bind_addr = (struct sockaddr *)&bind_addr_ss;
    }
  }
}
<snip>
Binding to different IP addresses can also be used for filtering and traffic redirection. Does having separate bind addresses for OR and Exit traffic work for your use case?
Yes, separate IP addresses for OutboundBindAddressOR (which we would set to our "incoming" interface address) and OutboundBindAddressExit (which we would set to our "outgoing" interface address) would work for our use case. One caveat is that we would then no longer have the mixing of relay and exit traffic (which overlaps e.g. on common ports like 80) on our outgoing interface/IP address. Without having analyzed it in detail, our gut feeling was that this mixing (if the QoS flag is removed) may actually be beneficial against traffic correlation attacks and/or filtering/scanning of the exit traffic by upstream providers (because it would require DPI, the more costly option, to distinguish e.g. Tor relay from HTTPS traffic). If this assumption is unwarranted and you don't see additional information leakage by separating relay and exit traffic by IP (and as mentioned, we have not thought about this systematically enough yet), then this patch would solve our issue.
I can't see it being too much of an issue. Tor does not attempt to defend against DPI of plain-text traffic.
And, as of 0.2.8, Tor clients will only ever make encrypted connections, even to fetch directory documents. This means that there will be very little unencrypted client traffic on port 80 by the time any patch like this is merged and appears on a lot of relays. (And it would only be useful to Exit operators with multiple IP addresses.)
I assume it would need additional changes to add the new OutboundBindAddressOR and OutboundBindAddressExit options to the config parser?
Yes. The config parser is table-driven, and populates a struct. You will need to add lines to the table and variables to the struct.
Reading the existing code for OutboundBindAddress should help, although it is a complex option, because it can be specified twice, and an IPv4 address is parsed to OutboundBindAddressIPv4_, but an IPv6 address is parsed to OutboundBindAddressIPv6_.
It would be best to refactor this parsing code, rather than duplicating it twice for the OutboundBindAddressOR and OutboundBindAddressExit options.
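Untested, but the additions would look roughly like this; check the existing OutboundBindAddress entries in config.c and or.h for the exact form, the snippet below just mirrors that pattern with the new names:

/* In the option table in config.c, next to the existing entry: */
V(OutboundBindAddressOR,       LINELIST, NULL),
V(OutboundBindAddressExit,     LINELIST, NULL),

/* In or_options_t, mirroring the existing fields: */
config_line_t *OutboundBindAddressOR;
config_line_t *OutboundBindAddressExit;
tor_addr_t OutboundBindAddressORIPv4_;
tor_addr_t OutboundBindAddressORIPv6_;
tor_addr_t OutboundBindAddressExitIPv4_;
tor_addr_t OutboundBindAddressExitIPv6_;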
Tim
best regards, Rene
T
-- Tim Wilson-Brown (teor)
teor2345 at gmail dot com PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B ricochet:ekmygaiu4rzgsk6n xmpp: teor at torproject dot org
Hi everybody,
Michael Sonntag has extended the patch below to make it configurable and tested it on a separate instance here in Linz. Seems to work for our use case. If that seems a good option, then we'd like to request review for upstream inclusion.
------------------------------------------------------------------------------------------
Explanation of the patch:
* Two new configuration options, but retain the old option for compatibility. Old configurations therefore remain fully valid. Configuration parsing is therefore a bit long, but not complex.
* Ease of update: The old and new options can exist simultaneously, as long as they are for different protocols (IPv4 or IPv6).
* Resiliency: If one address is missing, the other is substituted. Things should work at least somehow, even if some configuration is absent. Use the default only if no configuration for output traffic exists at all.
* New function introduced for selecting the address to bind to, to factor out this logic.
* Changes to other places the address was used before (policies.c and test_policy.c).
* Documentation of new options and sample configuration added.
------------------------------------------------------------------------------------------
best regards, Rene
On 2016-09-26 00:54, teor wrote:
I'm not sure if we want to tag Tor traffic with QoS values at Exits. Any tagging carries some degree of risk, because it makes traffic look more unique. I'm not sure how much of a risk QoS tagging represents.
I would prefer to add config options OutboundBindAddressOR and OutboundBindAddressExit, which would default to OutboundBindAddress when not set. (And could be specified twice, once for IPv4, and once for IPv6.)
The one concern I have about this is that Tor-over-Tor would stick out more, as it would look like Tor coming out the OutboundBindAddressExit IP. But we don't encourage Tor-over-Tor anyway.
I'd recommend a patch that modifies this section in connection_connect to use OutboundBindAddressOR and OutboundBindAddressExit, preferably with the Exit/OR/(all) and IPv4/IPv6 logic refactored into its own function.
if (!tor_addr_is_loopback(addr)) {
  const tor_addr_t *ext_addr = NULL;
  if (protocol_family == AF_INET &&
      !tor_addr_is_null(&options->OutboundBindAddressIPv4_))
    ext_addr = &options->OutboundBindAddressIPv4_;
  else if (protocol_family == AF_INET6 &&
           !tor_addr_is_null(&options->OutboundBindAddressIPv6_))
    ext_addr = &options->OutboundBindAddressIPv6_;
  if (ext_addr) {
    memset(&bind_addr_ss, 0, sizeof(bind_addr_ss));
    bind_addr_len = tor_addr_to_sockaddr(ext_addr, 0,
                                         (struct sockaddr *) &bind_addr_ss,
                                         sizeof(bind_addr_ss));
    if (bind_addr_len == 0) {
      log_warn(LD_NET,
               "Error converting OutboundBindAddress %s into sockaddr. "
               "Ignoring.", fmt_and_decorate_addr(ext_addr));
    } else {
      bind_addr = (struct sockaddr *)&bind_addr_ss;
    }
  }
}
Ideally, we would use 2 different providers to even further compartmentalize "incoming" (i.e. encrypted Tor network) from "outgoing" (for our exit node, mostly clearnet) traffic and make traffic correlation harder (doesn't help against a global adversary as we know, but at least a single ISP would not be able to directly correlate both sides of the relay). Although we don't have two different providers at this point, we still use two different network interfaces with associated IP addresses (one advertised as the Tor node for incoming traffic, and the other one with the default route assigned for outgoing traffic).
This sounds like an interesting setup. I'd be keen to see how it works out.
Some Exit providers (typically with their own AS) peer with multiple other providers, because this makes it harder for a single network tap to capture all their traffic.
Not quite the same as your setup, because OR and Exit traffic goes over all the links, rather than each going over a separate link.
... [The patch] Currently, both (clearnet) exit traffic and encrypted Tor traffic (to other nodes and hidden services) use the outgoing interface, as the Tor daemon simply creates TCP sockets and uses the default route (which points at the outgoing interface). A patch as suggested by grarpamp above could solve that issue. In the meantime, we have created a slightly hacky patch as attached. The simplest way to only record exit traffic and separate it from outgoing Tor traffic seemed to be to mark those packets with a ToS value - which, as far as we can see, can be done with a minimally invasive patch adding that option at a single point in connection.c. At the moment, we use this ToS value in a filter expression at the monitoring server to make sure that we do not analyze outgoing Tor traffic. We also plan to use it for policy routing rules at the Linux kernel level to send outgoing Tor traffic back out the "incoming" interface (to distinguish between Tor traffic and clear traffic). When that works, the ToS flag can actually be removed again before the packets leave the Tor node.
Binding to different IP addresses can also be used for filtering and traffic redirection. Does having separate bind addresses for OR and Exit traffic work for your use case?
On 20 Oct. 2016, at 20:19, René Mayrhofer rm@ins.jku.at wrote:
Hi everybody,
Michael Sonntag has extended the patch below to make it configurable and tested it on a separate instance here in Linz. Seems to work for our use case. If that seems a good option, then we'd like to request review for upstream inclusion.
Hi Rene / Michael,
Thanks for this patch. It seems like a useful feature, particularly for Exit operators.
I have some comments on the patch, but I'd like to make them using a better interface than email. We've found that it's hard to work out if all the comments have been dealt with when using email or the bug tracker for code reviews.
We generally track tor features using the Tor bug tracker: https://trac.torproject.org/projects/tor/newticket Use the Core Tor/Tor component.
We have found that gitlab is useful for making comments on patches. But I am also happy to use GitHub, or any other system you would like.
If you're ok with it, I can set up a branch here on gitlab with your patch: https://gitlab.com/teor/tor/merge_requests
Tim
Explanation of the patch:
- Two new configuration options, but retain the old option for
compatibility. Old configurations therefore remain fully valid. Configuration parsing is therefore a bit long, but not complex.
- Ease of update: The old and new options can exist simultaneously, as
long as they are for different protocols (IPv4 or IPv6)
- Resiliency: If one address is missing, the other is substituted.
Things should work at least somehow, even if some configuration is absent. Use the default only if no configuration for output traffic exists at all.
- New function introduced for selecting the address to bind to, to
factor out this logic.
- Changes to other places the address was used before (policies.c and
test_policy.c).
- Documentation of new options and sample configuration added
best regards, Rene
On 2016-09-26 00:54, teor wrote:
I'm not sure if we want to tag Tor traffic with QoS values at Exits. Any tagging carries some degree of risk, because it makes traffic look more unique. I'm not sure how much of a risk QoS tagging represents.
I would prefer to add config options OutboundBindAddressOR and OutboundBindAddressExit, which would default to OutboundBindAddress when not set. (And could be specified twice, once for IPv4, and once for IPv6.)
The one concern I have about this is that Tor-over-Tor would stick out more, as it would look like Tor coming out the OutboundBindAddressExit IP. But we don't encourage Tor-over-Tor anyway.
I'd recommend a patch that modifies this section in connection_connect to use OutboundBindAddressOR and OutboundBindAddressExit, preferably with the Exit/OR/(all) and IPv4/IPv6 logic refactored into its own function.
if (!tor_addr_is_loopback(addr)) {
  const tor_addr_t *ext_addr = NULL;
  if (protocol_family == AF_INET &&
      !tor_addr_is_null(&options->OutboundBindAddressIPv4_))
    ext_addr = &options->OutboundBindAddressIPv4_;
  else if (protocol_family == AF_INET6 &&
           !tor_addr_is_null(&options->OutboundBindAddressIPv6_))
    ext_addr = &options->OutboundBindAddressIPv6_;
  if (ext_addr) {
    memset(&bind_addr_ss, 0, sizeof(bind_addr_ss));
    bind_addr_len = tor_addr_to_sockaddr(ext_addr, 0,
                                         (struct sockaddr *) &bind_addr_ss,
                                         sizeof(bind_addr_ss));
    if (bind_addr_len == 0) {
      log_warn(LD_NET,
               "Error converting OutboundBindAddress %s into sockaddr. "
               "Ignoring.", fmt_and_decorate_addr(ext_addr));
    } else {
      bind_addr = (struct sockaddr *)&bind_addr_ss;
    }
  }
}
Ideally, we would use 2 different providers to even further compartmentalize "incoming" (i.e. encrypted Tor network) from "outgoing" (for our exit node, mostly clearnet) traffic and make traffic correlation harder (doesn't help against a global adversary as we know, but at least a single ISP would not be able to directly correlate both sides of the relay). Although we don't have two different providers at this point, we still use two different network interfaces with associated IP addresses (one advertised as the Tor node for incoming traffic, and the other one with the default route assigned for outgoing traffic).
This sounds like an interesting setup. I'd be keen to see how it works out.
Some Exit providers (typically with their own AS) peer with multiple other providers, because this makes it harder for a single network tap to capture all their traffic.
Not quite the same as your setup, because OR and Exit traffic goes over all the links, rather than each going over a separate link.
... [The patch] Currently, both (clearnet) exit traffic and encrypted Tor traffic (to other nodes and hidden services) use the outgoing interface, as the Tor daemon simply creates TCP sockets and uses the default route (which points at the outgoing interface). A patch as suggested by grarpamp above could solve that issue. In the meantime, we have created a slightly hacky patch as attached. The simplest way to only record exit traffic and separate it from outgoing Tor traffic seemed to be to mark those packets with a ToS value - which, as far as we can see, can be done with a minimally invasive patch adding that option at a single point in connection.c. At the moment, we use this ToS value in a filter expression at the monitoring server to make sure that we do not analyze outgoing Tor traffic. We also plan to use it for policy routing rules at the Linux kernel level to send outgoing Tor traffic back out the "incoming" interface (to distinguish between Tor traffic and clear traffic). When that works, the ToS flag can actually be removed again before the packets leave the Tor node.
Binding to different IP addresses can also be used for filtering and traffic redirection. Does having separate bind addresses for OR and Exit traffic work for your use case?
<patch.txt>
T
-- Tim Wilson-Brown (teor)
teor2345 at gmail dot com PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B ricochet:ekmygaiu4rzgsk6n xmpp: teor at torproject dot org
On 20.10.2016 at 11:34, teor wrote:
We generally track tor features using the Tor bug tracker: https://trac.torproject.org/projects/tor/newticket Use the Core Tor/Tor component.
We have found that gitlab is useful for making comments on patches. But I am also happy to use GitHub, or any other system you would like.
If you're ok with it, I can set up a branch here on gitlab with your patch: https://gitlab.com/teor/tor/merge_requests
Certainly - we are in the process of setting up gitlab here for student projects, and will therefore use it anyway. Happy to coordinate that way.
best regards, Rene
On 25 Oct. 2016, at 01:24, René Mayrhofer rm@ins.jku.at wrote:
On 20.10.2016 at 11:34, teor wrote:
We generally track tor features using the Tor bug tracker: https://trac.torproject.org/projects/tor/newticket Use the Core Tor/Tor component.
We have found that gitlab is useful for making comments on patches. But I am also happy to use GitHub, or any other system you would like.
If you're ok with it, I can set up a branch here on gitlab with your patch: https://gitlab.com/teor/tor/merge_requests
Certainly - we are in the process of setting up gitlab here for student projects, and will therefore use it anyway. Happy to coordinate that way.
Here is the existing enhancement request for this feature: https://trac.torproject.org/projects/tor/ticket/17975
Tim
best regards, Rene
T
On 2016-09-26 00:54, teor wrote:
The one concern I have about this is that Tor-over-Tor would stick out more, as it would look like Tor coming out the OutboundBindAddressExit IP. But we don't encourage Tor-over-Tor anyway.
ToT is technically not some special tor aware / tunneled relay function, but is instead just like any other application exiting to clearnet over tor, for which obaE is the correct sense. Further, if one can DPI or otherwise identify ToT, it makes no difference what interface or tag it comes from. So this is moot, no worries.
Binding to a particular source port is a bad idea - as the 4-tuple of:
Yeah I was on some multiplexing crack there.